<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>blog dot information dash superhighway dot net</title>
    <link>https://blog.information-superhighway.net/</link>
    <description></description>
    <pubDate>Thu, 16 Apr 2026 04:24:14 +0000</pubDate>
    <item>
      <title>On The Need For Understanding</title>
      <link>https://blog.information-superhighway.net/on-the-need-for-understanding</link>
      <description>&lt;![CDATA[I saw this Mastodon post from Andy Wingo recently:&#xA;&#xA;  in these days of coding agents and what-not, i often think of gerald sussman&#39;s comment that one no longer constructs systems from known parts, that instead one does basic science on the functionality of foreign libraries; he was right then and i hate it as much as i did 16 years ago&#xA;&#xA;I started drafting a reply, but it quickly began to spiral into a full-blown essay. It turns out that I have a lot of related thoughts that I&#39;ve been meaning to get down for a while.&#xA;&#xA;From the linked blog post:&#xA;&#xA;  ... Costanza asked Sussman why MIT had switched away from Scheme for their introductory programming course, 6.001. This was a gem. He said that the reason that happened was because engineering in 1980 was not what it was in the mid-90s or in 2000. In 1980, good programmers spent a lot of time thinking, and then produced spare code that they thought should work. Code ran close to the metal, even Scheme -- it was understandable all the way down. Like a resistor, where you could read the bands and know the power rating and the tolerance and the resistance and V=IR and that&#39;s all there was to know. 6.001 had been conceived to teach engineers how to take small parts that they understood entirely and use simple techniques to compose them into larger things that do what you want.&#xA;    But programming now isn&#39;t so much like that, said Sussman. Nowadays you muck around with incomprehensible or nonexistent man pages for software you don&#39;t know who wrote. You have to do basic science on your libraries to see how they work, trying out different inputs and seeing how the code reacts. This is a fundamentally different job, and it needed a different course.&#xA;&#xA;(For a primary source, here&#39;s a video of Sussman answering the same question at a different event, starting at 59:35.)&#xA;&#xA;I have really weird feelings about this assertion. 
It is obviously true that the software stack that most of the world runs on is a towering mess of leaky abstractions, a sprawling mass of haphazardly-interconnected components that nobody could claim to fully understand. It&#39;s also often the case that documentation is wildly inadequate, and experiment can sometimes be the simplest route to clearer understanding.&#xA;&#xA;But the implication is almost that we can&#39;t understand the components we&#39;re building on top of, that this way of working died in the 90s and now the job of programming is overwhelmingly just poking at stuff until it seems to work, and that&#39;s just the complete opposite of my personal experience.&#xA;&#xA;I was born in 1982, and started programming pretty much as soon as I could read. For many years starting out I really didn&#39;t understand how anything worked, and things got more complicated much faster than I could make sense of them. I started out programming BASIC on 8-bit computers like the VIC-20 and Apple II; programming meant sequencing commands that made Things Happen, and if BASIC didn&#39;t have a command for it, I couldn&#39;t do it. I knew assembly language existed, but that was a different universe; when I would occasionally find myself dropped into the Apple II machine code monitor, the only thing I could figure out to do with it was frantically mash Ctrl+Reset to attempt to get back to a BASIC prompt again.&#xA;&#xA;Later I got a 286, and I found myself straining against the limitations of QBASIC. It had more commands, which meant I could do more things, but it wasn&#39;t enough for me. 
I managed to get my hands on a copy of Microsoft QuickC, assuming that my experience with QBASIC would transfer over to this more powerful language that surely had far more commands that would let me do more things, but was dismayed to discover that the help files somehow didn&#39;t reveal any graphics commands at all!&#xA;&#xA;Eventually I got access to BBSes, and found some programming tutorials that explained how VGA worked, and even got my hands on a shareware BASIC compiler that let me try some of these ideas out. I still didn&#39;t feel like I really understood what was going on, but I had names for concepts at least, and I was able to get pixels on the screen. I even wrote a terrible paint program to create those pixels with, because parsing real graphics formats was beyond me, but I could manage &#34;width, height, pixels&#34;.&#xA;&#xA;Then I got a 486 and the internet and suddenly if I wanted to use any newer tools, none of the pieces I had struggled so hard to understand worked anymore. The old stuff was still there, but it was buried under a whole new layer of complexity. I spent so much time totally lost, my accumulated knowledge repeatedly crumbling to dust, the Next Important Thing I was supposed to learn hopelessly out of reach. I would get books on assembly language or win32 and they would fill me with despair. I would read people posting in newsgroups about DPMI, and how you couldn&#39;t make BIOS calls without switching between real mode and protected mode, and wonder how the fuck anyone was supposed to get a pixel on the screen.&#xA;&#xA;This was the mid-90s; about the timeframe when Sussman would have stopped teaching 6.001. The paraphrasing of the blog post is a bit less clear, but in the video Sussman is explicit that this is exactly the timeframe where he saw engineering changing in fundamental ways. 
This was not easy to deal with as a teenager who was trying to learn this stuff!&#xA;&#xA;All I wanted in the world was to mess around with stuff until it sort of worked. Understanding was for chumps. I didn&#39;t want to think about the problem space, or the messy realities of my platform of choice. I wanted the computer to Do Thing. I wanted libraries and languages with a simple face, that would solve problems for me without me having to think them through, because the amount of shit that didn&#39;t make sense to me was so overwhelming. I just wanted it to be easy.&#xA;&#xA;I would like to share with you a personal accomplishment from this era. I was proud of it then. I am not proud now, but remembering it helps me remember what things were really like for me, when I didn&#39;t have the skills and mindset I have now.&#xA;&#xA;DJGPP and Allegro had gotten me back into the comfortable world of using Commands that Did Things to make games. I was using Scream Tracker to write music, and I wanted to use the popular Mikmod library to put that music into my games. There was a companion library called &#34;Mikalleg&#34; which took care of connecting the sound output routines to Allegro&#39;s audio engine, and did some preprocessor trickery to work around the fact that both Mikmod and Allegro defined a struct called SAMPLE.&#xA;&#xA;I found it all kind of complicated to use. Mikmod / Mikalleg tended to provide flexible APIs that had a lot of confusing parameters. I didn&#39;t want to have to understand what all that stuff was; I couldn&#39;t understand why they didn&#39;t just give you a function that started playing the music you told it to play. So I figured out some values that worked, and wrote wrapper functions that removed all that flexibility and just did exactly the thing that I assumed everyone would want.&#xA;&#xA;This wasn&#39;t my only innovation. Mikalleg didn&#39;t interoperate with Allegro&#39;s facility for compressed data files. I wanted it to. 
(Real Games didn&#39;t just leave music files lying around that you could easily listen to or modify!) Mikmod had a generic I/O interface, so that you could override the file loading routine by providing &#34;read&#34; and &#34;write&#34; calls. It looked complicated, so I ignored it. Instead I wrote a function that would take the data in the Allegro file - which would have been decompressed and loaded into RAM at this point - write it to a file on whatever disk you happened to be using, tell Mikmod to load the file, and then delete it. I knew this probably wasn&#39;t an ideal solution, but gosh, it worked great! And it was so easy to do! Why would I ever want to spend so much effort to do things &#34;right&#34;?&#xA;&#xA;I proudly packaged this all up into a library I called &#34;Easymik&#34;. I made a webpage for it and everything. The Mikalleg author even prominently linked to it from the Mikalleg home page! People were desperate for a solution to this datafile problem, and I built one!&#xA;&#xA;I wanted, so badly, to be a Real Programmer. I knew what tools Real Programmers used; they used C and assembly. I knew what Real Programs looked like; there was an .exe file, and DOS4GW for some reason, and special data files in special formats, never .S3M files that I could listen to, never .PCX files that I could open in my paint programs, never text files that I could look at and edit. 
I didn&#39;t know why things were this way, just that they were, so the closer I could get to that, the more Real my programs became, the more likely it was that I would be Good At It, that I could make the computer do anything and everything I could imagine.&#xA;&#xA;If you had given me a magic box that I could ask to write programs for me, that generated code that I didn&#39;t understand, that sort of worked but might have weird problems, that I could pester with questions about esoteric technical subjects until it gave me reasonable-sounding-but-maybe-wrong answers that were on my level, I would have been delirious with joy. I would have shaken the devil&#39;s hand, weeping with gratitude, and leapt face-first into vibe coding with a ferocity you could scarcely imagine. Sure, it&#39;s a bit shit, but all of the resources I had access to were shit.&#xA;&#xA;I could have gotten stuck that way. I could have flailed around, not understanding, not wanting to understand, for many more years than I did. But I think there were a few events that sent me on a better trajectory.&#xA;&#xA;The first event was that I improbably landed a programming job working with QNX while I was still in high school. By that time I&#39;d learned enough C to be dangerous; I&#39;d mostly figured out how to use pointers, and could use malloc and free even though I couldn&#39;t have told you the difference between the stack and the heap. (I have a vivid sense memory of the panic and terror I felt when a coworker casually dropped one of those terms while talking to me about something.)&#xA;&#xA;QNX was everything MS-DOS and Windows were not; clear, elegant, understandable, reliable. I read Rob Krten&#39;s book and said to myself, &#34;of course this is how an operating system should work.&#34; I never managed to write a win32 program, but I could wrap my head around Photon. 
It proved to me that things didn&#39;t need to be so awful; that you could make a great many things wildly easier to understand simply by finding a good design.&#xA;&#xA;The only problem was I didn&#39;t know how to find a good design. I wanted to know, so badly, but the advice I was reading didn&#39;t make any sense to me; lots of people just asserting that you should build things a certain way, follow some arbitrary rules, often contradicting each other, usually in vague terms I didn&#39;t really understand. I got very interested in alternative programming languages and tools - DSLs, Lisp, Forth - stuff that had a tangible form, but was spoken about as though it was magic. Surely they&#39;d cracked the problem.&#xA;&#xA;At my next job, I felt like I was ready to try out some of the concepts I was reading about. I had just finished the unenviable task of patching dozens of bespoke tools to support a new peripheral. These tools had all been haphazardly built by cloning the last one and swizzling its code to support the new product; at its core there were really only two use cases. I proposed a single, unified codebase to replace them all. I spent months building a generic tool-building toolkit, infinitely configurable via XML. Solving 80% of the problem was reasonably straightforward, but the last 20% had me doing absurd contortions like defining a bidirectional algebra for specifying the structure of a serial number string. I was undaunted; I powered through and ended up with something that worked pretty well for the first use case, even if writing the XML was a little bit tricky sometimes.&#xA;&#xA;When I extended the system to handle the second use case, though, it became apparent that I had begun to run afoul of Greenspun&#39;s tenth rule. I had defined a dynamic Value class that could contain Anything, so that I could dynamically define the data schemas for binary log files in XML, rather than C++. 
This ended up taking a few hundred kilobytes of logs, and running my beefy development PC completely out of RAM just trying to parse them. It was a fiasco.&#xA;&#xA;I eventually managed to optimize it enough that it could ship; IIRC, I ended up giving up and hardcoding the schema. The way I&#39;d constructed everything, this meant that everyone touching the one unified generic codebase would have to add product-specific code anyway, exactly the thing I worked for months to avoid. It was obviously inferior to what it had replaced. I&#39;d worked so hard to do the elegant thing, to build a reusable system out of simple, beautiful pieces, and I had produced a huge blob of pointless complexity which had only made life harder for myself and whoever would have to work with it next.&#xA;&#xA;Everything still felt so hard, but I was more convinced than ever that the secret to making everything easier was out there, that better tools were the answer. Propelled by a conviction that domain-specific languages were going to be the answer to everything, I moved thousands of kilometers away, temporarily immigrated to the United States, and landed an even more unlikely job where a billionaire paid me to fail to fix a tree-merging algorithm for five years.&#xA;&#xA;It turns out that if you are not thinking about things the right way, it is possible to spin your wheels struggling with a single problem for a very, very, very long time.&#xA;&#xA;It was at that job that, for the first time, I finally confronted the fear that had been silently driving me throughout my career - the fear of truly digging into the problem. The fear of admitting when I don&#39;t really understand something. The fear that it will all be too complicated and overwhelming to deal with. 
I realized, finally, that what I had secretly been yearning for all along, the dream I had spent years of my life trying to realize, that I uprooted my life to pursue, was some magical tool, some impossible technique, that would free me from the need to learn things that felt too hard and scary.&#xA;&#xA;But not learning? Trying to build something without understanding it? It turns out that&#39;s so much harder.&#xA;&#xA;Nancy comic strip. Panel 1: Nancy at her desk, scowling, thinking: &#34;How can I get out of having to think hard?&#34; Panel 2: Nancy walking home, brows furrowed, thinking: &#34;How&#34; Panel 3: Nancy lying in bed, on top of the covers, staring at the ceiling: &#34;How&#34;&#xA;&#xA;You must understand a problem before you can solve it. That lesson is core to everything I have done since. If a component isn&#39;t working the way you expect, you have to dig in and figure out how it actually works, what it&#39;s actually doing, or you have just given yourself a problem that you refuse to understand. If the problem is of any importance, then that is a terrible mistake. I don&#39;t want to waste years of my life spinning my wheels like that ever again.&#xA;&#xA;Everything gets easier once you commit to understanding how things work. The more you do it, the easier it gets. The more you learn, the more you understand, the more you can accomplish. This is the magical tool I spent half my career searching for.&#xA;&#xA;I printed out the checklist from Polya&#39;s How To Solve It and started referring to it whenever I sat down to work. I never read the rest of the book. It&#39;s just a handful of questions to ask yourself, some suggestions to help make sure you have a handle on things, but it&#39;s powerful stuff. I had been slowly making progress with the tree-merging algorithm, finding bugs with an ad-hoc randomized testing system (basically QuickCheck without a shrinking pass, so every failed test case was a monster that took days to untangle). 
I finally got things to a point where I wasn&#39;t hitting data loss or crash bugs, but the output was still pretty far from ideal. I finally confronted some core assumptions about the tree-merge algorithm that the codebase had started with, and which I had never felt comfortable enough to question, and realized that they had been pointlessly making my job wildly more difficult. I tore the whole thing apart, the fragile thing I had spent years patching, the thing I had finally managed to get sort-of working, and rebuilt it from scratch.&#xA;&#xA;It worked flawlessly.&#xA;&#xA;The main thing I now bring to the table, as a software professional, is the ability to understand what&#39;s really going on. Sometimes, yes, this involves doing basic science to answer a question, but I usually find it hard to be satisfied with an answer until I have gone down to the source code of the confusing component to really understand it. This is an unreasonably effective skill that has only become more relevant over the years, not less.&#xA;&#xA;Sussman correctly identifies the 90s as a time when complexity exploded in engineering, both electrical and software. But I&#39;m not willing to grant that we have been entirely unsuccessful at taming the complexity in the intervening decades. You will never convince me that the state of affairs now, from a programmer&#39;s perspective, is worse than writing Windows 95 apps in C. The huge thing that separates the complexity of the 90s from the complexity we have now was that, in the 90s, the inner workings of everything was secret. You didn&#39;t get the source code to Windows 95, you programmed to the docs, and when it had bugs, or didn&#39;t behave the way the docs said it did, or you just didn&#39;t understand how it was supposed to work, there was virtually nothing else you could do but poke at your code until it stopped triggering the problem. 
Breaking out the kernel debugger and reverse engineering your operating system is not a viable approach for most people!&#xA;&#xA;But this is not the same situation software developers are in today. Yes, we have huge libraries, more now than ever! But most are open source; when they don&#39;t behave as we expect, it is orders of magnitude easier to figure out why than it used to be. Most are more reliable than they were then, either by being written in languages that make writing reliable software easier, or through simply having been around for decades, with most major bugs slowly killed off through sheer force of attrition. Components worth using in 2025 usually have a coherent way in which they are designed to work, they will usually work in that way, and it is usually possible to learn what that is. There is still plenty of garbage out there, of course - you should choose your dependencies carefully! - but you are absolutely spoiled for choice in comparison to the 90s.&#xA;&#xA;I did some Android UI programming about a decade ago now. I would write custom View subclasses from time to time, and every so often I would struggle with bugs where the UI layout would need to be recomputed, but it just wouldn&#39;t happen. The documentation was useless. I would try to sprinkle around even more calls to requestLayout or forceLayout, but nothing seemed to help.&#xA;&#xA;Finally I got fed up and dug into the source. Turns out the way it works under the hood is that requestLayout and forceLayout just set a few flags on the view object; they don&#39;t &#34;schedule&#34; anything, contrary to the docs. requestLayout recursively calls itself on the parent, which is what gives the signal to Android the next time it goes to draw the screen that some stuff needs laying out again. 
forceLayout sets the same flags, but only on itself.&#xA;&#xA;I don&#39;t know what actual use forceLayout is meant to have, because in practice, what it actually means is &#34;if one of my descendants calls requestLayout, stop propagating that flag upwards once you reach me&#34;. It&#39;s a method whose sole purpose seems to be to create layout bugs that are impossible to track down. I also remember there being a data race, where if you called one of those functions while layout was happening, the flags of the various views in the tree could get in funky inconsistent states, and everything would get screwed up.&#xA;&#xA;After I read the code, I finally understood how Android&#39;s layout algorithm actually worked - exactly when measurement passes ran, what was triggering them, what I needed to avoid doing in them. I could rework my custom views so they wouldn&#39;t trip this problem anymore. I understood weird behaviour I had experienced in other situations. Running experiments, poking things, doing science - that got me brittle, buggy code that took forever to get working and that I was afraid to mess with. Opening up the black box gave me confidence.&#xA;&#xA;It is still possible to build a complex piece of software that works out of simpler pieces of software that work. You can choose dependencies that are understandable. Programming in 2025 does not have to be about fumbling in the dark. Everything gets easier if you&#39;re willing to turn on the light.]]&gt;</description>
      <content:encoded><![CDATA[<p>I saw this <a href="https://mastodon.social/@wingo/115536049662405712">Mastodon post from Andy Wingo</a> recently:</p>

<blockquote><p>in these days of coding agents and what-not, i often think of gerald sussman&#39;s comment that one no longer constructs systems from known parts, that instead one does basic science on the functionality of foreign libraries; he was right then and i hate it as much as i did 16 years ago</p></blockquote>

<p>I started drafting a reply, but it quickly began to spiral into a full-blown essay. It turns out that I have a lot of related thoughts that I&#39;ve been meaning to get down for a while.</p>

<p>From the <a href="https://wingolog.org/archives/2009/03/24/international-lisp-conference-day-two">linked blog post</a>:</p>

<blockquote><p>... Costanza asked Sussman why MIT had switched away from Scheme for their introductory programming course, 6.001. This was a gem. He said that the reason that happened was because engineering in 1980 was not what it was in the mid-90s or in 2000. In 1980, good programmers spent a lot of time thinking, and then produced spare code that they thought should work. Code ran close to the metal, even Scheme — it was understandable all the way down. Like a resistor, where you could read the bands and know the power rating and the tolerance and the resistance and V=IR and that&#39;s all there was to know. 6.001 had been conceived to teach engineers how to take small parts that they understood entirely and use simple techniques to compose them into larger things that do what you want.</p>

<p>But programming now isn&#39;t so much like that, said Sussman. Nowadays you muck around with incomprehensible or nonexistent man pages for software you don&#39;t know who wrote. You have to do basic science on your libraries to see how they work, trying out different inputs and seeing how the code reacts. This is a fundamentally different job, and it needed a different course.</p></blockquote>

<p>(For a primary source, here&#39;s <a href="https://vimeo.com/151465912?fl=pl&amp;fe=sh#t=59m35s">a video of Sussman answering the same question at a different event</a>, starting at 59:35.)</p>

<p>I have <em>really weird</em> feelings about this assertion. It is obviously true that the software stack that most of the world runs on is a towering mess of leaky abstractions, a sprawling mass of haphazardly-interconnected components that nobody could claim to fully understand. It&#39;s also often the case that documentation is wildly inadequate, and experiment can sometimes be the simplest route to clearer understanding.</p>

<p>But the implication is almost that we <em>can&#39;t</em> understand the components we&#39;re building on top of, that this way of working died in the 90s and now the job of programming is overwhelmingly just poking at stuff until it seems to work, and that&#39;s just the complete opposite of my personal experience.</p>

<p>I was born in 1982, and started programming pretty much as soon as I could read. For many years starting out I really didn&#39;t understand how anything worked, and things got more complicated <em>much</em> faster than I could make sense of them. I started out programming BASIC on 8-bit computers like the VIC-20 and Apple II; programming meant sequencing commands that made Things Happen, and if BASIC didn&#39;t have a command for it, I couldn&#39;t do it. I knew assembly language existed, but that was a different universe; when I would occasionally find myself dropped into the Apple II machine code monitor, the only thing I could figure out to do with it was frantically mash Ctrl+Reset to attempt to get back to a BASIC prompt again.</p>

<p>Later I got a 286, and I found myself straining against the limitations of QBASIC. It had more commands, which meant I could do more things, but it wasn&#39;t enough for me. I managed to get my hands on a copy of Microsoft QuickC, assuming that my experience with QBASIC would transfer over to this more powerful language that surely had far more commands that would let me do more things, but was dismayed to discover that the help files somehow didn&#39;t reveal any graphics commands <em>at all</em>!</p>

<p>Eventually I got access to BBSes, and found some <a href="https://wiki.nox-rhea.org/back2root/ibm-pc-ms-dos/denthor">programming tutorials</a> that explained how VGA worked, and even got my hands on a <a href="https://en.wikipedia.org/wiki/ASIC_programming_language">shareware BASIC compiler</a> that let me try some of these ideas out. I still didn&#39;t feel like I really understood what was going on, but I had names for concepts at least, and I was able to get pixels on the screen. I even wrote a terrible paint program to create those pixels with, because parsing real graphics formats was beyond me, but I could manage “width, height, pixels”.</p>

<p>Then I got a 486 and the internet and suddenly if I wanted to use any newer tools, none of the pieces I had struggled so hard to understand worked anymore. The old stuff was still there, but it was buried under a whole new layer of complexity. I spent so much time totally lost, my accumulated knowledge repeatedly crumbling to dust, the Next Important Thing I was supposed to learn hopelessly out of reach. I would get books on assembly language or win32 and they would fill me with despair. I would read people posting in newsgroups about DPMI, and how you couldn&#39;t make BIOS calls without switching between real mode and protected mode, and wonder how the fuck anyone was supposed to get a pixel on the screen.</p>

<p>This was the mid-90s; about the timeframe when Sussman would have stopped teaching 6.001. The paraphrasing of the blog post is a bit less clear, but in the video <a href="https://vimeo.com/151465912?fl=pl&amp;fe=sh#t=1h1m53s">Sussman is explicit</a> that this is exactly the timeframe where he saw engineering changing in fundamental ways. This was not easy to deal with as a teenager who was trying to learn this stuff!</p>

<p>All I wanted in the world was to mess around with stuff until it sort of worked. Understanding was for chumps. I didn&#39;t want to think about the problem space, or the messy realities of my platform of choice. I wanted the computer to Do Thing. I wanted libraries and languages with a simple face, that would solve problems for me without me having to think them through, because the amount of shit that didn&#39;t make sense to me was <em>so overwhelming</em>. I just wanted it to be easy.</p>

<p>I would like to share with you a personal accomplishment from this era. I was proud of it then. I am not proud now, but remembering it helps me remember what things were really like for me, when I didn&#39;t have the skills and mindset I have now.</p>

<p><a href="https://fringe.games/episodes/ep6-shawn-hargreaves.html">DJGPP and Allegro</a> had gotten me back into the comfortable world of using Commands that Did Things to make games. I was using <a href="https://en.wikipedia.org/wiki/Scream_Tracker">Scream Tracker</a> to write music, and I wanted to use the popular <a href="https://mikmod.sourceforge.net/">Mikmod</a> library to put that music into my games. There was a companion library called “<a href="https://web.archive.org/web/19990221042217/http://www.geocities.com/SiliconValley/Vista/8890/mikalg.html">Mikalleg</a>” which took care of connecting the sound output routines to Allegro&#39;s audio engine, and did some preprocessor trickery to work around the fact that both Mikmod and Allegro defined a struct called <code>SAMPLE</code>.</p>

<p>I found it all kind of complicated to use. Mikmod / Mikalleg tended to provide flexible APIs that had a lot of confusing parameters. I didn&#39;t want to have to understand what all that stuff was; I couldn&#39;t understand why they didn&#39;t just give you a function that started playing the music you told it to play. So I figured out some values that worked, and wrote wrapper functions that removed all that flexibility and just did exactly the thing that I assumed everyone would want.</p>

<p>This wasn&#39;t my only innovation. Mikalleg didn&#39;t interoperate with Allegro&#39;s facility for compressed data files. I wanted it to. (Real Games didn&#39;t just leave music files lying around that you could easily listen to or modify!) Mikmod had a generic I/O interface, so that you could override the file loading routine by providing “read” and “write” calls. It looked complicated, so I ignored it. Instead I wrote a function that would take the data in the Allegro file – which would have been decompressed and loaded into RAM at this point – write it to a file on whatever disk you happened to be using, tell Mikmod to load the file, and then delete it. I knew this probably wasn&#39;t an ideal solution, but gosh, it worked great! And it was so easy to do! Why would I ever want to spend so much effort to do things “right”?</p>

<p>I proudly packaged this all up into a library I called “Easymik”. I made a webpage for it and everything. The Mikalleg author even prominently linked to it from the Mikalleg home page! People were desperate for a solution to this datafile problem, and I built one!</p>

<p>I wanted, so badly, to be a Real Programmer. I knew what tools Real Programmers used; they used C and assembly. I knew what Real Programs looked like; there was an .exe file, and DOS4GW for some reason, and special data files in special formats, never .S3M files that I could listen to, never .PCX files that I could open in my paint programs, never text files that I could look at and edit. I didn&#39;t know <em>why</em> things were this way, just that they were, so the closer I could get to that, the more Real my programs became, the more likely it was that I would be Good At It, that I could make the computer do anything and everything I could imagine.</p>

<p>If you had given me a magic box that I could ask to write programs for me, that generated code that I didn&#39;t understand, that sort of worked but might have weird problems, that I could pester with questions about esoteric technical subjects until it gave me reasonable-sounding-but-maybe-wrong answers that were on my level, I would have been <em>delirious</em> with joy. I would have shaken the devil&#39;s hand, weeping with gratitude, and leapt face-first into vibe coding with a ferocity you could scarcely imagine. Sure, it&#39;s a bit shit, but <em>all of the resources I had access to were shit</em>.</p>

<p>I could have gotten stuck that way. I could have flailed around, not understanding, not wanting to understand, for many more years than I did. But I think there were a few events that sent me on a better trajectory.</p>

<p>The first event was that I improbably landed a programming job working with QNX while I was still in high school. By that time I&#39;d learned enough C to be dangerous; I&#39;d mostly figured out how to use pointers, and could use <code>malloc</code> and <code>free</code> even though I couldn&#39;t have told you the difference between the stack and the heap. (I have a vivid sense memory of the panic and terror I felt when a coworker casually dropped one of those terms while talking to me about something.)</p>

<p>QNX was everything MS-DOS and Windows were not: clear, elegant, understandable, reliable. I read <a href="https://archive.org/details/gettingstartedwi0000krte">Rob Krten&#39;s book</a> and said to myself, “of <em>course</em> this is how an operating system should work.” I never managed to write a win32 program, but I could wrap my head around Photon. It proved to me that things didn&#39;t need to be so awful – that you could make a great many things wildly easier to understand simply by finding a good design.</p>

<p>The only problem was I didn&#39;t know <em>how</em> to find a good design. I wanted to know, <em>so badly</em>, but the advice I was reading didn&#39;t make any sense to me; lots of people just <em>asserting</em> that you should build things a certain way, follow some arbitrary rules, often contradicting each other, usually in vague terms I didn&#39;t really understand. I got very interested in alternative programming languages and tools – DSLs, Lisp, Forth – stuff that had a tangible form, but was spoken about as though it was magic. Surely they&#39;d cracked the problem.</p>

<p>At my next job, I felt like I was ready to try out some of the concepts I was reading about. I had just finished the unenviable task of patching dozens of bespoke tools to support a new peripheral. These tools had all been haphazardly built by cloning the last one and swizzling its code to support the new product; at its core there were really only two use cases. I proposed a single, unified codebase to replace them all. I spent months building a generic tool-building toolkit, infinitely configurable via XML. Solving 80% of the problem was reasonably straightforward, but the last 20% had me doing absurd contortions like defining a bidirectional algebra for specifying the structure of a serial number string. I was undaunted; I powered through and ended up with something that worked pretty well for the first use case, even if writing the XML was a little bit tricky sometimes.</p>

<p>When I extended the system to handle the second use case, though, it became apparent that I had begun to run afoul of Greenspun&#39;s tenth rule. I had defined a dynamic <code>Value</code> class that could contain Anything, so that I could dynamically define the data schemas for binary log files in XML, rather than C++. It ended up so heavyweight that parsing a few hundred kilobytes of logs ran my beefy development PC completely out of RAM. It was a fiasco.</p>

<p>I eventually managed to optimize it enough that it could ship; IIRC, I ended up giving up and hardcoding the schema. The way I&#39;d constructed everything, this meant that everyone touching the one unified generic codebase would have to add product-specific code anyway, exactly the thing I had worked for months to avoid. It was obviously inferior to the tools it had replaced. I&#39;d worked <em>so hard</em> to do the elegant thing, to build a reusable system out of simple, beautiful pieces, and I had produced a huge blob of pointless complexity which had only made life harder for myself and whoever would have to work with it next.</p>

<p>Everything still felt so hard, but I was more convinced than ever that the secret to making everything easier was out there, that better tools were the answer. Propelled by a conviction that domain-specific languages were going to be the answer to everything, I moved thousands of kilometers away, temporarily immigrated to the United States, and landed an even more unlikely job where a billionaire paid me to fail to fix a tree-merging algorithm for five years.</p>

<p>It turns out that if you are not thinking about things the right way, it is possible to spin your wheels struggling with a single problem for a very, very, <em>very</em> long time.</p>

<p>It was at that job that, for the first time, I finally confronted the fear that had been silently driving me throughout my career – the fear of truly digging into the problem. The fear of admitting when I don&#39;t really understand something. The fear that it will all be too complicated and overwhelming to deal with. I realized, finally, that what I had secretly been yearning for all along, the dream I had spent years of my life trying to realize, that I uprooted my life to pursue, was some magical tool, some impossible technique, that would free me from the need to learn things that felt too hard and scary.</p>

<p>But <em>not</em> learning? Trying to build something <em>without understanding it</em>? It turns out that&#39;s <em>so much harder</em>.</p>

<p><img src="https://information-superhighway.net/nancy-how-can-i-get-out-of-having-to-think.png" alt="Nancy comic strip. Panel 1: Nancy at her desk, scowling, thinking: &#34;How can I get out of having to think hard?&#34; Panel 2: Nancy walking home, brows furrowed, thinking: &#34;How&#34; Panel 3: Nancy lying in bed, on top of the covers, staring at the ceiling: &#34;How&#34;"></p>

<p>You <em>must</em> understand a problem before you can solve it. That lesson is core to everything I have done since. If a component isn&#39;t working the way you expect, you <em>have</em> to dig in and figure out how it <em>actually</em> works, what it&#39;s <em>actually</em> doing, or you have just given yourself a problem that you refuse to understand. If the problem is of any importance, then that is a <em>terrible</em> mistake. I don&#39;t want to waste years of my life spinning my wheels like that <em>ever again</em>.</p>

<p><em>Everything gets easier</em> once you commit to understanding how things work. The more you do it, the easier it gets. The more you learn, the more you understand, the more you can accomplish. <em>This</em> is the magical tool I spent half my career searching for.</p>

<p>I printed out <a href="https://archive.org/details/polya-how-to-solve-it/page/n7/mode/1up">the checklist from Polya&#39;s How To Solve It</a> and started referring to it whenever I sat down to work. I never read the rest of the book. It&#39;s just a handful of questions to ask yourself, some suggestions to help make sure you have a handle on things, but it&#39;s powerful stuff. I had been slowly making progress with the tree-merging algorithm, finding bugs with an ad-hoc randomized testing system (basically <a href="https://en.wikipedia.org/wiki/QuickCheck">QuickCheck</a> without a shrinking pass, so every failed test case was a monster that took days to untangle). I finally got things to a point where I wasn&#39;t hitting data loss or crash bugs, but the output was still pretty far from ideal. I finally confronted some core assumptions about the tree-merge algorithm that the codebase had started with, and which I had never felt comfortable enough to question, and realized that they had been pointlessly making my job wildly more difficult. I tore the whole thing apart, the fragile thing I had spent years patching, the thing I had finally managed to get sort-of working, and rebuilt it from scratch.</p>

<p>It worked flawlessly.</p>

<p>The main thing I now bring to the table, as a software professional, is the ability to understand what&#39;s really going on. Sometimes, yes, this involves doing basic science to answer a question, but I usually find it hard to be satisfied with an answer until I have gone down to the source code of the confusing component to really understand it. This is an unreasonably effective skill that has only become <em>more</em> relevant over the years, not less.</p>

<p>Sussman correctly identifies the 90s as a time when complexity exploded in engineering, both electrical and software. But I&#39;m not willing to grant that we have been entirely unsuccessful at taming that complexity in the intervening decades. You will never convince me that the state of affairs now, from a programmer&#39;s perspective, is <em>worse</em> than writing Windows 95 apps in C. The huge thing that separates the complexity of the 90s from the complexity we have now is that, in the 90s, the inner workings of everything were <em>secret</em>. You didn&#39;t get the source code to Windows 95; you programmed to the docs, and when it had bugs, or didn&#39;t behave the way the docs said it did, or you just didn&#39;t understand how it was supposed to work, there was <em>virtually nothing else you could do</em> but poke at your code until it stopped triggering the problem. Breaking out the kernel debugger and reverse engineering your operating system is not a viable approach for most people!</p>

<p>But this is not the same situation software developers are in today. Yes, we have huge libraries, more now than ever! But most are open source; when they don&#39;t behave as we expect, it is <em>orders of magnitude</em> easier to figure out why than it used to be. Most are more reliable than they were then, either by being written in languages that make writing reliable software easier, or through simply having been around for decades, with most major bugs slowly killed off through sheer force of attrition. Components worth using in 2025 usually have a coherent way in which they are designed to work, they will usually work in that way, and it is usually possible to learn what that is. There is still plenty of garbage out there, of course – you should choose your dependencies carefully! – but you are absolutely spoiled for choice in comparison to the 90s.</p>

<p>I did some Android UI programming about a decade ago now. I would write custom <code>View</code> subclasses from time to time, and every so often I would struggle with bugs where the UI layout would need to be recomputed, but it just wouldn&#39;t happen. The <a href="https://web.archive.org/web/20110216135059/http://developer.android.com/reference/android/view/View.html#requestLayout()">documentation was useless</a>. I would try to sprinkle around even more calls to <code>requestLayout</code> or <code>forceLayout</code>, but nothing seemed to help.</p>

<p>Finally I got fed up and dug into the source. Turns out the way it works under the hood is that <code>requestLayout</code> and <code>forceLayout</code> just set a few flags on the view object; they don&#39;t “schedule” anything, contrary to the docs. <code>requestLayout</code> recursively calls itself on the parent, which is what signals to Android, the next time it goes to draw the screen, that some stuff needs laying out again. <code>forceLayout</code> sets the same flags, but only on itself.</p>

<p>I don&#39;t know what actual use <code>forceLayout</code> is meant to have, because in practice, what it <em>actually</em> means is “if one of my descendants calls <code>requestLayout</code>, stop propagating that flag upwards once you reach me”. It&#39;s a method whose sole purpose seems to be to create layout bugs that are impossible to track down. I also remember there being a data race, where if you called one of those functions <em>while layout was happening</em>, the flags of the various views in the tree could get into funky, inconsistent states, and everything would get screwed up.</p>
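<p>The “stop propagating” effect is easy to model. Here&#39;s a toy sketch of the flag behaviour described above – hypothetical Python standing in for the real thing, with illustrative names, not the actual AOSP code:</p>

```python
# A toy model of the flag behaviour described above. NOT the actual
# AOSP code; class and method names are illustrative only.

class View:
    def __init__(self, parent=None):
        self.parent = parent
        self.layout_requested = False

    def request_layout(self):
        # Set the flag, then propagate it toward the root so the next
        # draw pass knows this subtree needs to be measured again.
        self.layout_requested = True
        if self.parent is not None and not self.parent.layout_requested:
            self.parent.request_layout()

    def force_layout(self):
        # Sets the same flag, but only on this view.
        self.layout_requested = True

root = View()
child = View(parent=root)
grandchild = View(parent=child)

child.force_layout()          # middle view now looks "already requested"
grandchild.request_layout()   # propagation stops at child...
print(root.layout_requested)  # ...prints False: the root never finds out
```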

<p>After I read the code, I finally understood how Android&#39;s layout algorithm actually worked – exactly when measurement passes ran, what was triggering them, what I needed to avoid doing in them. I could rework my custom views so they wouldn&#39;t trip this problem anymore. I understood weird behaviour I had experienced in <em>other</em> situations. Running experiments, poking things, doing science – that got me brittle, buggy code that took forever to get working and that I was afraid to mess with. Opening up the black box gave me confidence.</p>

<p>It is still possible to build a complex piece of software that works out of simpler pieces of software that work. You can choose dependencies that are understandable. Programming in 2025 does not have to be about fumbling in the dark. Everything gets easier if you&#39;re willing to turn on the light.</p>
]]></content:encoded>
      <guid>https://blog.information-superhighway.net/on-the-need-for-understanding</guid>
      <pubDate>Tue, 30 Dec 2025 14:25:54 +0000</pubDate>
    </item>
    <item>
      <title>A Forth vocabulary for iteration</title>
      <link>https://blog.information-superhighway.net/a-forth-vocabulary-for-iteration</link>
      <description>&lt;![CDATA[I recently wrote a small 16-bit Forth for 8086 PCs running DOS. I built the basic one-liner loop words that can trivially be built with just &#34;branch if zero&#34; and &#34;goto&#34;: begin, while, repeat, until, again. But I held off on implementing do / loop at first.&#xA;&#xA;It didn&#39;t seem like too much of a hardship. In a previous Forth I&#39;d built, I&#39;d implemented do / loop using the return stack, but it was... ugly. The code to implement it was ugly, the code it generated was ugly (and large!), and I didn&#39;t find a lot of places where it was actually much nicer to use than explicit begin-based loops. I was able to implement an 8086 assembler and a Minesweeper game without bothering to build do / loop. I didn&#39;t really miss it, but I had a design percolating in the back of my mind that I wanted to try.&#xA;&#xA;At some point I came across some writing that suggested that Forth had a &#34;loop control stack&#34;. Wouldn&#39;t it be nice if I could implement some kind of loop control stack that worked for all kinds of iteration?&#xA;&#xA;The thing I built has blown me away with how flexible, composable, and useful it&#39;s turned out to be. It&#39;s way more powerful than I was expecting. And the code that leverages it is inevitably much simpler and easier to read. !--more--&#xA;&#xA;The Stacks&#xA;&#xA;I added two loop control stacks - what I call the i-stack, and the next-stack. The i-stack contains the current value(s) being iterated over, and is read from with the i and j words like normal. The next-stack is where the magic happens.&#xA;&#xA;When iterating, the top value of the next-stack is a pointer to a small structure called an iterator. It&#39;s a very simple structure, only two cells. The first cell contains the execution token of a word that will either update the current values on the i-stack and return true, or remove its state from both stacks and return false. 
The second cell points to a cancellation function that cleans up whatever state the iterator has kept on the two stacks, without iterating further, and returns nothing.&#xA;&#xA;Iterators&#xA;&#xA;I built some simple helpers for creating iterators. It took a few tries to nail down this design, but I&#39;m happy with it now. defiter creates a &#34;blank&#34; iterator, which, when called, pushes itself to the next-stack. :iter does the same thing but allows you to write some code that accepts parameters and prepares the loop stacks first. :next defines a new anonymous &#34;next-word&#34; and assigns it to the most-recently defined iterator. :cancel does the same thing, but for cancellation.&#xA;&#xA;On its own, this is already quite nice. I&#39;ve got a page or so of basically-trivial iterators. Here&#39;s one:&#xA;&#xA;:iter times ( n -- ) &gt;i ;&#xA;:next &lt;i dup 1- &gt;i finish? ;&#xA;:cancel idrop nextdrop ;&#xA;&#xA;times keeps its state in the i-stack - it initializes itself by pushing the number of times to repeat onto it. When fetching the next value, it pops the current value off the i-stack, decrements it, and pushes it back, leaving the old value on the data stack. finish? is a simple helper word that peeks at the top of the stack and runs the current cancellation function if it&#39;s false, or in this case, if we&#39;ve already hit 0. Since cleaning up after an iterator is often the same job whether you&#39;re exiting early or not, this word is very handy. Explicitly defining cancellation for this iterator isn&#39;t actually necessary in my current implementation, because idrop nextdrop is common enough that I use it as the default.&#xA;&#xA;each / next&#xA;&#xA;I can use these iteration words (within a compiled definition) like this:&#xA;&#xA;5 times each i . next&#xA;( outputs: 4 3 2 1 0 )&#xA;&#xA;All the common loop types are easy to build in this system, as well as some uncommon ones:&#xA;&#xA;5 10 for each i . 
next ( outputs: 5 6 7 8 9 )&#xA;0 10 2 for+ each i . next ( outputs: 0 2 4 6 8 )&#xA;( pchars yields pointers to each byte in a zero-terminated string )&#xA;s&#34; hello&#34; pchars each i b@ emit next ( outputs: hello )&#xA;&#xA;Generic cancellation, of course, allows us to trivially implement break; just cancel the iteration at the top of the stack, and then jump to the each loop exit point, after next. continue is even simpler, just jump back to the top of the loop.&#xA;&#xA;5 times each i 3 &lt; if break then i . next ( outputs: 4 3 )&#xA;5 times each i 2 % if continue then i . next ( outputs 4 2 0 )&#xA;&#xA;Under the hood, each just calls the &#34;next-word&#34; of the iterator and jumps to the end of the loop if it returns 0 - conceptually identical to begin iterate while, with next meaning the same thing as repeat. This allows for iterators that return no values.&#xA;&#xA;0 times each i . next ( outputs: )&#xA;&#xA;Generators&#xA;&#xA;That&#39;s nice, but it&#39;s not exactly setting the world on fire; it&#39;s a fair amount of work just to end up with a few different ways of writing &#34;for&#34; loops in practice, that Forth systems have had forever anyway. Is it really worth the cost of this abstraction?&#xA;&#xA;Turns out, absolutely, yes, it is, because you can also build generators on it, and that blows things wide open.&#xA;&#xA;First, a simple example:&#xA;&#xA;: 5-2-8 (( 5 yield 2 yield 8 yield )) ;&#xA;5-2-8 each i . next ( outputs: 5 2 8 )&#xA;&#xA;(( defines the start of the generator, and )) defines the end (and pushes it onto the next-stack). Any valid Forth code goes in between. yield takes the top of the stack, pushes it onto the i-stack, and then suspends the generator until the next iteration. How does this work? Essentially, yield takes the top of the return stack and pushes it onto the next-stack, then pushes an iterator that pops it off the next-stack and pushes it back onto the return stack. 
The details get a little messier in order to support some more advanced use cases, but that&#39;s the simple idea at the core of it.&#xA;&#xA;OK, neat trick, we&#39;ve built ourselves a nice little coroutine-like system. But wait! It gets better! When yield resumes, it immediately removes all of its state from the iteration stacks. This means that generators can safely interact with any iterator that might be &#34;underneath&#34; it. They can iterate over things and yield in the middle! They can yield different things based on those values! We&#39;ve accidentally built an extremely powerful, totally generic map/filter capability!&#xA;&#xA;: doubled (( each i i + map next )) ;&#xA;5 times doubled each i . next ( outputs: 8 6 4 2 0 )&#xA;: odd (( each i 2 % filter next )) ;&#xA;5 times odd each i . next ( outputs: 3 1 )&#xA;&#xA;map and filter are more yield-like words - it turns out that there&#39;s a number of these that you might want to implement, with different logic for suspending, resuming, and cancelling. map saves the top of the i-stack onto the next-stack and replaces it with the input, restoring the original value after resuming (necessary since the iterator underneath might be using that value as its state). filter conditionally suspends based on the top of the data stack but otherwise doesn&#39;t touch the i-stack, leaving whatever iterator is running underneath to provide the value. Both of these words push iterators with special cancel logic that knows that there is another iterator underneath, and can cancel again recursively once they&#39;ve cleaned themselves up.&#xA;&#xA;Generator state&#xA;&#xA;This design can almost be made to work for generators that have extra state, but it&#39;s awkward and incomplete. You must ensure the data stack is clean whenever you yield, so you&#39;re forced to manually shuffle data to and from the next stack. 
Consider a filter that only returns values that are divisible by a certain number: &#xA;&#xA;: divisible-by ( n -- )   next &#xA;  (( next each i over % 0 = swap next filter &lt;next next drop )) ;&#xA;5 divisible-by 21 times each i . next ( outputs: 20 15 10 5 0 )&#xA;&#xA;This works, but there&#39;s so much stack noise! And it breaks down if you need to cancel, because filter has no idea that there&#39;s extra stuff on the next-stack that it needs to clear. Ideally there would be some automatic way of keeping the state of the generator on the data stack while it&#39;s running, and pushing it safely away when we suspend. Could there be some way to write divisible-by like this?&#xA;&#xA;: divisible-by ( n -- )   arg (( each i over % 0 = filter next drop )) ;&#xA;&#xA;In fact, this code works in my implementation. The scheme to make this happen is a little bit subtle, but it can be done efficiently with a minimum of bookkeeping noise in most cases. I define a variable, gen-arg-count, that starts at zero.   arg is an immediate word that compiles a call to   next and increments that variable. Then, any time I compile a yielding word, I append the value of gen-arg-count to the instruction stream - much like lit. When suspending, the yielding word reads that value out of the instruction stream and transfers that many values from the data stack to the next-stack. Then it moves the pointer to the instruction stream from the return stack to the next-stack, and finally pushes the yielding iterator. That iterator then pulls the instruction pointer back off the next-stack to determine how many values to move from the next-stack back onto the data stack, as well as where to resume the instruction stream. Cancellation similarly can read the arg-count byte to know how many extra values to drop from the next-stack.&#xA;&#xA;Generators need to ensure the data stack is empty before exiting at )). 
At one point I considered having )) compile the appropriate number of drop calls automatically, but in the end I decided that it&#39;s reasonable and idiomatic to expect a generator to exit with a clean stack, like any other Forth word would.&#xA;&#xA;With this extension, it&#39;s trivial to write all kinds of new iterators - we could even do away with the base iterator system entirely and just express everything as generators. There are lots of nice one-line definitions of times:&#xA;&#xA;( 1 ) : times ( n -- )   arg (( begin dup while 1- dup yield repeat drop )) ;&#xA;( 2 ) : times ( n -- )   next (( next begin dup while 1- yield repeat drop )) ;&#xA;( 3 ) : times ( n -- )   arg (( -arg begin dup while 1- yield  repeat drop )) ;&#xA;( 4 ) ( suspend ) &#39; noop ( resume ) &#39; noop ( cancel ) &#39; idrop :yield iyield&#xA;: times ( n -- )   i (( begin i while i 1- i iyield repeat idrop )) ;&#xA;&#xA;Definition 1 doesn&#39;t use anything I haven&#39;t already explained. The state of the iterator is managed on the data stack, and automatically shuffled back and forth from the next-stack by yield.&#xA;&#xA;Definition 2 adds a new word. yield   is a yielder that moves the yielded value from the i-stack back onto the data stack when it resumes, instead of dropping it. The state of the iterator starts on the next-stack but is moved to the i-stack once the iteration loop actually starts.&#xA;&#xA;Definition 3 is virtually the same as 2, but demonstrates the ability to handle changes in the amount of state. -arg is an immediate word that generates no code, but decrements gen-arg-count so that you can express that you&#39;ve consumed the argument and the next yield should preserve one less value on the data stack. (+arg is also defined, performing an increment, in case you generate more values on the stack than you started with.) &#xA;&#xA;Definition 4 is built to keep all state on the i-stack from the beginning. Here we use :yield to define a new yielding word. 
I realized I hadn&#39;t built a yielder that left the i-stack alone when resuming, but would drop the value when cancelling, so I added one.&#xA;&#xA;All of these options will correctly be cancelled if the code iterating over it calls break, with no special effort!&#xA;&#xA;Final thoughts&#xA;&#xA;With this scheme, generators always take up at least two spaces on the next-stack - one for the yielder&#39;s iterator, and one for the resume point. But if all iterators were defined as generators, and all yielding words had to be defined with :yield to ensure a uniform structure, we could just push the resume point. iterate and cancel could easily find the appropriate function pointer by looking next to the resume point for the address of the yielder and digging inside. I think this could be built in such a way that it would be basically as efficient as the existing scheme, at the cost of making the whole thing more complex to explain. It might be worth pursuing, because generators are so pleasant to read and write, and raw iterators are... less so. I basically never want to write a raw iterator besides the very basic ones that are built-in.&#xA;&#xA;All the source for my Forth system is available online; the iteration system is defined in iter.jrt. There are some interesting examples of generators in embed.jrt, dialer.jrt and rick.jrt - some highlights:&#xA;&#xA;rle-decode - takes a pointer to some run-length encoded packed data, yields a stream of values. Uses the times iterator internally to count off the repeated values.&#xA;menu-options - Provides a dynamic list of items to display in a menu. Yields 2 values at a time - the text to display, and the function to execute when the user selects it.&#xA;xmit-iter - Writes text to the screen with a small delay between each character, to simulate a slow serial connection. 
An extremely simple loop that can be driven by complex generation logic - including streaming RLE-encoded data with embedded colour information.&#xA;&#xA;#forth #code #essays]]&gt;</description>
      <content:encoded><![CDATA[<p>I recently wrote a small 16-bit Forth for 8086 PCs running DOS. I built the basic one-liner loop words that can trivially be built with just “branch if zero” and “goto”: <code>begin</code>, <code>while</code>, <code>repeat</code>, <code>until</code>, <code>again</code>. But I held off on implementing <code>do</code> / <code>loop</code> at first.</p>

<p>It didn&#39;t seem like too much of a hardship. In a previous Forth I&#39;d built, I&#39;d implemented <code>do</code> / <code>loop</code> using the return stack, but it was... ugly. The code to implement it was ugly, the code it generated was ugly (and large!), and I didn&#39;t find a lot of places where it was actually much nicer to use than explicit <code>begin</code>-based loops. I was able to implement an 8086 assembler and a Minesweeper game without bothering to build <code>do</code> / <code>loop</code>. I didn&#39;t really miss it, but I had a design percolating in the back of my mind that I wanted to try.</p>

<p>At some point I came across some writing that suggested that Forth had a “loop control stack”. Wouldn&#39;t it be nice if I could implement some kind of loop control stack that worked for <em>all</em> kinds of iteration?</p>

<p>The thing I built has blown me away with how flexible, composable, and useful it&#39;s turned out to be. It&#39;s <em>way</em> more powerful than I was expecting. And the code that leverages it is invariably much simpler and easier to read.</p>

<h2 id="the-stacks">The Stacks</h2>

<p>I added <em>two</em> loop control stacks – what I call the <code>i-stack</code>, and the <code>next-stack</code>. The <code>i-stack</code> contains the current value(s) being iterated over, and is read from with the <code>i</code> and <code>j</code> words like normal. The <code>next-stack</code> is where the magic happens.</p>

<p>When iterating, the top value of the <code>next-stack</code> is a pointer to a small structure called an iterator. It&#39;s a very simple structure, only two cells. The first cell contains the execution token of a word that will either update the current values on the <code>i-stack</code> and return true, or remove its state from both stacks and return false. The second cell points to a cancellation function that cleans up whatever state the iterator has kept on the two stacks, without iterating further, and returns nothing.</p>

<h2 id="iterators">Iterators</h2>

<p>I built some simple helpers for creating iterators. It took a few tries to nail down this design, but I&#39;m happy with it now. <code>defiter</code> creates a “blank” iterator, which, when called, pushes itself to the <code>next-stack</code>. <code>:iter</code> does the same thing but allows you to write some code that accepts parameters and prepares the loop stacks first. <code>:next</code> defines a new anonymous “next-word” and assigns it to the most-recently defined iterator. <code>:cancel</code> does the same thing, but for cancellation.</p>

<p>On its own, this is already quite nice. I&#39;ve got a page or so of basically-trivial iterators. Here&#39;s one:</p>

<pre><code>:iter times ( n -- ) &gt;i ;
:next &lt;i dup 1- &gt;i finish? ;
:cancel idrop nextdrop ;
</code></pre>

<p><code>times</code> keeps its state in the <code>i-stack</code> – it initializes itself by pushing the number of times to repeat onto it. When fetching the next value, it pops the current value off the <code>i-stack</code>, decrements it, and pushes it back, leaving the old value on the data stack. <code>finish?</code> is a simple helper word that peeks at the top of the stack and runs the current cancellation function if it&#39;s false, or in this case, if we&#39;ve already hit 0. Since cleaning up after an iterator is often the same job whether you&#39;re exiting early or not, this word is very handy. Explicitly defining cancellation for this iterator isn&#39;t actually necessary in my current implementation, because <code>idrop nextdrop</code> is common enough that I use it as the default.</p>

<h2 id="each-next">each / next</h2>

<p>I can use these iteration words (within a compiled definition) like this:</p>

<pre><code>5 times each i . next
( outputs: 4 3 2 1 0 )
</code></pre>

<p>All the common loop types are easy to build in this system, as well as some uncommon ones:</p>

<pre><code>5 10 for each i . next ( outputs: 5 6 7 8 9 )
0 10 2 for+ each i . next ( outputs: 0 2 4 6 8 )
( pchars yields pointers to each byte in a zero-terminated string )
s&#34; hello&#34; pchars each i b@ emit next ( outputs: hello )
</code></pre>

<p>Generic cancellation, of course, allows us to trivially implement <code>break</code>: just cancel the iteration at the top of the stack, and then jump to the <code>each</code> loop exit point, after <code>next</code>. <code>continue</code> is even simpler: just jump back to the top of the loop.</p>

<pre><code>5 times each i 3 &lt; if break then i . next ( outputs: 4 3 )
5 times each i 2 % if continue then i . next ( outputs: 4 2 0 )
</code></pre>

<p>Under the hood, <code>each</code> just calls the “next-word” of the iterator and jumps to the end of the loop if it returns 0 – conceptually identical to <code>begin iterate while</code>, with <code>next</code> meaning the same thing as <code>repeat</code>. This allows for iterators that return no values.</p>

<pre><code>0 times each i . next ( outputs: )
</code></pre>
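<p>The driver protocol – a next-word that leaves a truthy continue flag, with state kept on the <code>i-stack</code> – is easy to model. Here is a toy Python sketch of my own, purely for illustration (none of these names come from the actual Forth system):</p>

```python
# Toy model of the iteration protocol: state lives on an explicit i-stack,
# next-words live on a next-stack, and the loop runs until the next-word
# returns a falsy value. All names here are illustrative.
i_stack = []      # iterator state (the i-stack)
next_stack = []   # next-words of active iterators

def times(n):
    i_stack.append(n)             # :iter times ( n -- ) >i ;
    next_stack.append(times_next)

def times_next():
    n = i_stack.pop()             # <i
    i_stack.append(n - 1)         # dup 1- >i
    return n                      # old value doubles as the continue flag

def times_cancel():
    i_stack.pop()                 # idrop
    next_stack.pop()              # nextdrop

out = []
times(5)
while next_stack[-1]():           # `each` calls the next-word...
    out.append(i_stack[-1])       # ...and `i` reads the top of the i-stack
times_cancel()
print(out)                        # [4, 3, 2, 1, 0]
```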

<h2 id="generators">Generators</h2>

<p>That&#39;s <em>nice</em>, but it&#39;s not exactly setting the world on fire; it&#39;s a fair amount of work just to end up with a few different ways of writing the “for” loops that Forth systems have had forever anyway. Is it really worth the cost of this abstraction?</p>

<p>Turns out, absolutely, yes, it is, because you can also build generators on it, and that blows things <em>wide</em> open.</p>

<p>First, a simple example:</p>

<pre><code>: 5-2-8 (( 5 yield 2 yield 8 yield )) ;
5-2-8 each i . next ( outputs: 5 2 8 )
</code></pre>

<p><code>((</code> defines the start of the generator, and <code>))</code> defines the end (and pushes it onto the <code>next-stack</code>). <em>Any valid Forth code goes in between</em>. <code>yield</code> takes the top of the stack, pushes it onto the <code>i-stack</code>, and then <em>suspends the generator</em> until the next iteration. How does this work? Essentially, <code>yield</code> takes the top of the return stack and pushes it onto the <code>next-stack</code>, then pushes an iterator that pops it off the <code>next-stack</code> and pushes it back onto the return stack. The details get a little messier in order to support some more advanced use cases, but that&#39;s the simple idea at the core of it.</p>
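<p>If you know Python, its generators perform exactly this resume-point capture, just inside the runtime instead of on an explicit <code>next-stack</code>. A rough model of my own (not the author&#39;s implementation) of <code>((</code>, <code>yield</code>, and the iteration driver in those terms:</p>

```python
# Rough model: the next-stack holds resumable generator objects in place
# of saved return addresses; yielded values land on the i-stack.
next_stack = []
i_stack = []

def five_two_eight():
    # : 5-2-8 (( 5 yield 2 yield 8 yield )) ;
    yield 5
    yield 2
    yield 8

def start(gen_fn):
    next_stack.append(gen_fn())   # (( ... )) pushes the resume handle

def iterate():
    # the loop driver calls this; a falsy return ends the loop
    try:
        i_stack.append(next(next_stack[-1]))
        return True
    except StopIteration:
        next_stack.pop()          # generator finished: clean it up
        return False

out = []
start(five_two_eight)
while iterate():                  # 5-2-8 each i . next
    out.append(i_stack.pop())
print(out)                        # [5, 2, 8]
```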

<p>OK, neat trick, we&#39;ve built ourselves a nice little coroutine-like system. But wait! It gets better! When <code>yield</code> resumes, it immediately removes all of its state from the iteration stacks. This means that <em>generators can safely interact with any iterator that might be “underneath” them</em>. They can iterate over things and yield in the middle! They can yield <em>different things</em> based on those values! We&#39;ve accidentally built an extremely powerful, totally generic map/filter capability!</p>

<pre><code>: doubled (( each i i + map next )) ;
5 times doubled each i . next ( outputs: 8 6 4 2 0 )
: odd (( each i 2 % filter next )) ;
5 times odd each i . next ( outputs: 3 1 )
</code></pre>

<p><code>map</code> and <code>filter</code> are more <code>yield</code>-like words – it turns out that there&#39;s a number of these that you might want to implement, with different logic for suspending, resuming, and cancelling. <code>map</code> saves the top of the <code>i-stack</code> onto the <code>next-stack</code> and replaces it with the input, restoring the original value after resuming (necessary since the iterator underneath might be using that value as its state). <code>filter</code> conditionally suspends based on the top of the data stack but otherwise doesn&#39;t touch the <code>i-stack</code>, leaving whatever iterator is running underneath to provide the value. Both of these words push iterators with special <code>cancel</code> logic that knows that there is another iterator underneath, and can <code>cancel</code> again recursively once they&#39;ve cleaned themselves up.</p>
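<p>In Python-generator terms, <code>map</code>- and <code>filter</code>-style yielders are just generators that wrap whatever iterator runs underneath them. A loose sketch of mine of the two examples above:</p>

```python
def times(n):
    # stand-in for `n times`: counts n-1 down to 0
    yield from range(n - 1, -1, -1)

def doubled(source):
    # : doubled (( each i i + map next )) ;
    for i in source:
        yield i + i

def odd(source):
    # : odd (( each i 2 % filter next )) ;
    for i in source:
        if i % 2:
            yield i

print(list(doubled(times(5))))    # [8, 6, 4, 2, 0]
print(list(odd(times(5))))        # [3, 1]
```

<p>Cancellation is the part Python handles for you (closing a generator unwinds it); the Forth system instead threads that logic through each yielder&#39;s <code>cancel</code> word by hand.</p>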

<h2 id="generator-state">Generator state</h2>

<p>This design can <em>almost</em> be made to work for generators that have extra state, but it&#39;s awkward and incomplete. You must ensure the data stack is clean whenever you yield, so you&#39;re forced to manually shuffle data to and from the next stack. Consider a filter that only returns values that are divisible by a certain number:</p>

<pre><code>: divisible-by ( n -- ) &gt;next 
  (( &lt;next each i over % 0 = swap &gt;next filter &lt;next next drop )) ;
5 divisible-by 21 times each i . next ( outputs: 20 15 10 5 0 )
</code></pre>

<p>This works, but there&#39;s so much stack noise! And it breaks down if you need to cancel, because <code>filter</code> has no idea that there&#39;s extra stuff on the <code>next-stack</code> that it needs to clear. Ideally there would be some automatic way of keeping the state of the generator on the data stack while it&#39;s running, and pushing it safely away when we suspend. Could there be some way to write <code>divisible-by</code> like this?</p>

<pre><code>: divisible-by ( n -- ) &gt;arg (( each i over % 0 = filter next drop )) ;
</code></pre>

<p>In fact, this code works in my implementation. The scheme to make this happen is a little bit subtle, but it can be done efficiently with a minimum of bookkeeping noise in most cases. I define a variable, <code>gen-arg-count</code>, that starts at zero. <code>&gt;arg</code> is an immediate word that compiles a call to <code>&gt;next</code> and increments that variable. Then, any time I compile a yielding word, I append the value of <code>gen-arg-count</code> to the instruction stream – much like <code>lit</code>. When suspending, the yielding word reads that value out of the instruction stream and transfers that many values from the data stack to the <code>next-stack</code>. Then it moves the pointer to the instruction stream from the return stack to the <code>next-stack</code>, and finally pushes the yielding iterator. That iterator then pulls the instruction pointer back off the <code>next-stack</code> to determine how many values to move from the <code>next-stack</code> back onto the data stack, as well as where to resume the instruction stream. Cancellation similarly can read the <code>arg-count</code> byte to know how many extra values to drop from the <code>next-stack</code>.</p>
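<p>The suspend/resume bookkeeping is easy to sketch in isolation. This is a toy model of my own: the real system reads the count out of the instruction stream rather than storing a tuple, but the stack traffic is the same shape:</p>

```python
def suspend(data_stack, next_stack, resume_ip, arg_count):
    # move gen-arg-count values off the data stack so it is clean while suspended
    for _ in range(arg_count):
        next_stack.append(data_stack.pop())
    next_stack.append((resume_ip, arg_count))   # resume point goes on top

def resume(data_stack, next_stack):
    resume_ip, arg_count = next_stack.pop()
    for _ in range(arg_count):                  # restore in reverse order
        data_stack.append(next_stack.pop())
    return resume_ip

ds, ns = [10, 3], []
suspend(ds, ns, resume_ip=42, arg_count=1)      # one >arg value to preserve
assert ds == [10]                               # data stack is clean for the consumer
assert resume(ds, ns) == 42 and ds == [10, 3]   # state comes back on resume
```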

<p>Generators need to ensure the data stack is empty before exiting at <code>))</code>. At one point I considered having <code>))</code> compile the appropriate number of <code>drop</code> calls automatically, but in the end I decided that it&#39;s reasonable and idiomatic to expect a generator to exit with a clean stack, like any other Forth word would.</p>

<p>With this extension, it&#39;s trivial to write all kinds of new iterators – we could even do away with the base iterator system entirely and just express everything as generators. There are lots of nice one-line definitions of <code>times</code>:</p>

<pre><code>( 1 ) : times ( n -- ) &gt;arg (( begin dup while 1- dup yield repeat drop )) ;
( 2 ) : times ( n -- ) &gt;next (( &lt;next begin dup while 1- yield&gt; repeat drop )) ;
( 3 ) : times ( n -- ) &gt;arg (( -arg begin dup while 1- yield&gt; repeat drop )) ;
( 4 ) ( suspend ) &#39; noop ( resume ) &#39; noop ( cancel ) &#39; idrop :yield iyield
: times ( n -- ) &gt;i (( begin i while &lt;i 1- &gt;i iyield repeat idrop )) ;
</code></pre>

<p>Definition 1 doesn&#39;t use anything I haven&#39;t already explained. The state of the iterator is managed on the data stack, and automatically shuffled back and forth from the <code>next-stack</code> by <code>yield</code>.</p>

<p>Definition 2 adds a new word. <code>yield&gt;</code> is a yielder that moves the yielded value from the <code>i-stack</code> back onto the data stack when it resumes, instead of dropping it. The state of the iterator starts on the <code>next-stack</code> but is moved to the <code>i-stack</code> once the iteration loop actually starts.</p>

<p>Definition 3 is virtually the same as 2, but demonstrates the ability to handle changes in the amount of state. <code>-arg</code> is an immediate word that generates no code, but decrements <code>gen-arg-count</code> so that you can express that you&#39;ve consumed the argument and the next yield should preserve one less value on the data stack. (<code>+arg</code> is also defined, performing an increment, in case you generate more values on the stack than you started with.)</p>

<p>Definition 4 is built to keep all state on the <code>i-stack</code> from the beginning. Here we use <code>:yield</code> to define a new yielding word. I realized I hadn&#39;t built a yielder that left the <code>i-stack</code> alone when resuming, but would drop the value when cancelling, so I added one.</p>

<p>All of these options will correctly be cancelled if the code iterating over them calls <code>break</code>, with no special effort!</p>

<h2 id="final-thoughts">Final thoughts</h2>

<p>With this scheme, generators always take up at least two spaces on the <code>next-stack</code> – one for the yielder&#39;s iterator, and one for the resume point. But if <em>all</em> iterators were defined as generators, and all yielding words had to be defined with <code>:yield</code> to ensure a uniform structure, we could just push the resume point. <code>iterate</code> and <code>cancel</code> could easily find the appropriate function pointer by looking next to the resume point for the address of the yielder and digging inside. I think this could be built in such a way that it would be basically as efficient as the existing scheme, at the cost of making the whole thing more complex to explain. It might be worth pursuing, because generators are so pleasant to read and write, and raw iterators are... less so. I basically never want to write a raw iterator besides the very basic ones that are built-in.</p>

<p>All <a href="https://git.information-superhighway.net/SpindleyQ/dialer">the source for my Forth system</a> is available online; the iteration system is defined in <a href="https://git.information-superhighway.net/SpindleyQ/dialer/src/branch/main/iter.jrt"><code>iter.jrt</code></a>. There are some interesting examples of generators in <a href="https://git.information-superhighway.net/SpindleyQ/dialer/src/branch/main/embed.jrt"><code>embed.jrt</code></a>, <a href="https://git.information-superhighway.net/SpindleyQ/dialer/src/branch/main/dialer.jrt"><code>dialer.jrt</code></a> and <a href="https://git.information-superhighway.net/SpindleyQ/dialer/src/branch/main/rick.jrt"><code>rick.jrt</code></a> – some highlights:</p>
<ul><li><a href="https://git.information-superhighway.net/SpindleyQ/dialer/src/commit/17ae93540901e7c4259aa81463fa540dc36769ad/embed.jrt#L47"><code>rle-decode</code></a> – takes a pointer to some run-length encoded packed data, yields a stream of values. Uses the <code>times</code> iterator internally to count off the repeated values.</li>
<li><a href="https://git.information-superhighway.net/SpindleyQ/dialer/src/commit/17ae93540901e7c4259aa81463fa540dc36769ad/dialer.jrt#L222"><code>menu-options</code></a> – Provides a dynamic list of items to display in a menu. Yields 2 values at a time – the text to display, and the function to execute when the user selects it.</li>
<li><a href="https://git.information-superhighway.net/SpindleyQ/dialer/src/commit/17ae93540901e7c4259aa81463fa540dc36769ad/dialer.jrt#L111"><code>xmit-iter</code></a> – Writes text to the screen with a small delay between each character, to simulate a slow serial connection. An extremely simple loop that can be driven by complex generation logic – including <a href="https://git.information-superhighway.net/SpindleyQ/dialer/src/commit/17ae93540901e7c4259aa81463fa540dc36769ad/dialer.jrt#L211">streaming RLE-encoded data with embedded colour information</a>.</li></ul>

<p><a href="https://blog.information-superhighway.net/tag:forth" class="hashtag"><span>#</span><span class="p-category">forth</span></a> <a href="https://blog.information-superhighway.net/tag:code" class="hashtag"><span>#</span><span class="p-category">code</span></a> <a href="https://blog.information-superhighway.net/tag:essays" class="hashtag"><span>#</span><span class="p-category">essays</span></a></p>
]]></content:encoded>
      <guid>https://blog.information-superhighway.net/a-forth-vocabulary-for-iteration</guid>
      <pubDate>Wed, 08 Nov 2023 14:35:58 +0000</pubDate>
    </item>
    <item>
      <title>Forth: The local variable question</title>
      <link>https://blog.information-superhighway.net/forth-the-local-variable-question</link>
      <description>&lt;![CDATA[I fairly frequently see people who are taking an interest in Forth struggle with the idea of programming without local variables. I struggled with it when I started writing Forth! I feel like there&#39;s an unspoken assumption for people coming to Forth from other languages, and if I were to speak it aloud, it would sound something like &#34;temporary data should go on the stack&#34;.&#xA;&#xA;Because... functions should be re-entrant by default! They should clean up after themselves! Global variables are bad and must be avoided at all costs! Functions should be &#34;pure&#34; and take all of their inputs as parameters, avoiding hidden dependencies!&#xA;&#xA;All of these ideas of what &#34;good code&#34; looks like are wrong in Forth.&#xA;&#xA;It is actually extremely common for Forth words to rely on implicit context, which is globally accessible through other Forth words. This is often how you build DSLs! !--more--&#xA;&#xA;Perhaps you are familiar with the JavaScript canvas API. It&#39;s based on PostScript, as are most vector drawing APIs, and PostScript, as you may know, is a Forth-like postfix language for printed graphics. The canvas API has a bunch of implicit state. When you draw a rectangle, for example, you pass in just the position and size. If you want to specify properties like the fill colour, stroke colour, stroke width, line cap style, and on and on and on, you call setter methods before calling the draw function. If you want to preserve the previous canvas state and return to it when you&#39;re done, you can explicitly push it onto a stack.&#xA;&#xA;This is one secret sauce to writing small Forth words - you build little vocabularies that all work with some kernel of shared state.&#xA;&#xA;Let&#39;s implement Bresenham&#39;s line algorithm&#xA;&#xA;I had the idea to implement an algorithm where juggling all of the state on the stack would be a nightmare, to show an example of what this looks like in practice. 
I&#39;ve always found Bresenham&#39;s line-drawing algorithm kind of awkward - most implementations in C switch between several nearly-identical code blocks depending on how steep the line is. But the core idea is actually very simple, and the awkward near-duplication of the standard C implementation does not have to be reproduced in Forth.&#xA;&#xA;First we will define a simple textual canvas vocabulary:&#xA;&#xA;80 CONSTANT SCREEN-W &#xA;24 CONSTANT SCREEN-H&#xA;CREATE SCREEN SCREEN-W SCREEN-H  ALLOT&#xA;CREATE SCREEN-BRUSH KEY + C,&#xA;&#xA;: SET-BRUSH ( -- ) KEY SCREEN-BRUSH C! ;&#xA;: FILL-SCREEN ( -- ) SCREEN-W SCREEN-H  SCREEN + SCREEN DO I SCREEN-BRUSH C@ SWAP C! LOOP ;&#xA;: SCREEN-XY ( x y -- ptr ) SCREEN-W  + SCREEN + ;&#xA;: PLOT-XY ( x y -- ) SCREEN-XY SCREEN-BRUSH C@ SWAP C! ;&#xA;: PRINT-ROW ( y -- ) 0 SWAP SCREEN-XY SCREEN-W TYPE ;&#xA;: PRINT-SCREEN SCREEN-H 0 DO I PRINT-ROW CR LOOP ;&#xA;&#xA;This is ANS Forth - my personal Forths have all been lowercase, I don&#39;t usually like all the shouting.&#xA;&#xA;This creates a buffer called SCREEN that is 80 columns wide by 24 rows tall. It also defines the concept of a brush, which is just an ASCII character that is put into this buffer by PLOT-XY. Our line-drawing routine will use PLOT-XY to put &#34;pixels&#34; on the &#34;screen&#34; without caring about what they look like. Kind of a canvassy idea.&#xA;&#xA;Now let&#39;s clear the screen:&#xA;&#xA;SET-BRUSH +&#xA;FILL-SCREEN &#xA;SET-BRUSH $&#xA;&#xA;I use the + character for &#34;off&#34; and the $ character for &#34;on&#34; because they were about the same width in the variable-width font that my browser picked when plugging this code into jsForth. The trick where SET-BRUSH reads the next character in the code directly is cute but brittle; it only works interactively and will break weirdly in a : definition. WAForth can&#39;t handle it at all, it pops up a dialog box asking for you to type a character. Feel free to use 43 SCREEN-BRUSH C! 
to draw with + and 36 SCREEN-BRUSH C! to draw with $ if you want to follow along in WAForth. Define little helper words for them even, like BRUSH-+ and BRUSH-$. It&#39;s not a big problem, don&#39;t overthink it, but do make yourself comfortable.&#xA;&#xA;An aside: How to draw a line&#xA;&#xA;So let&#39;s talk for a minute about how Bresenham&#39;s line-drawing algorithm works. The Wikipedia article has a bunch of math and symbols but at its core it&#39;s really very simple. Start with a specific kind of line, that slopes upwards and to the right, but not steeper than 45 degrees.&#xA;&#xA;Start at the bottom-left side of the line. Draw that pixel.&#xA;Move your X coordinate one to the right. Now you need to decide if the Y coordinate needs to move up one or stay where it is.&#xA;To do that, you keep track of a subpixel fraction; ie. you start in the middle of a pixel (0.5), and increment it by the amount that the line has risen over the last pixel: (y2-y1)/(x2-x1) or dy/dx.&#xA;If the fraction is   1, move Y up one pixel and subtract 1 from the fraction; the fraction value is now somewhere within the bottom half of the next highest pixel.&#xA;Now draw the next pixel and go back to step 2 until you end up at the top-right end of the line.&#xA;&#xA;This is very simple! We then layer on just a few simple tricks:&#xA;&#xA;Instead of always moving along the X axis, for lines that are taller than they are long, we need to move along the Y axis. To do this we simply always move in the direction of the longer side, and run the decision logic along the shorter axis. This way the slope is never steeper than 45 degrees.&#xA;If, for example, the line slopes down instead of up, when we decide whether to move along the Y axis, we need to move down one pixel instead of up. We can handle this by simply incrementing instead of decrementing along the appropriate axis.&#xA;In the olden days, floating point numbers were very slow and integers were fast. 
Since the &#34;error&#34; value (really a fractional pixel location, but everyone calls it &#34;error&#34;) always has the same denominator, and we don&#39;t do anything more complicated than adding more fractions with the same denominator to it, we can just keep the denominator implicit and store the numerator in an integer. We choose 2  dx (when x is the long axis) as the denominator so that we can easily start exactly on a half pixel (ie. our starting value is dx/2dx, and we increment by 2  dy every step). It doesn&#39;t actually make a huge amount of difference what you use for a starting value though, as long as it&#39;s smaller than your implicit denominator then you&#39;ll end up with a line that starts and ends where you expect.&#xA;&#xA;That&#39;s it! That&#39;s the whole thing.&#xA;&#xA;Now back to writing Forth&#xA;&#xA;So, first off, let&#39;s define the state that we&#39;ll need. Starting and ending X and Y coordinates, the current X and Y coordinates, and the fractional &#34;error&#34; value. Definitely need to remember all that.&#xA;&#xA;VARIABLE LINE-X1 VARIABLE LINE-Y1 &#xA;VARIABLE LINE-X2 VARIABLE LINE-Y2&#xA;VARIABLE LINE-X  VARIABLE LINE-Y  VARIABLE LINE-ERR&#xA;&#xA;Now we can start defining helper words. Let&#39;s write a couple of words to figure out the length of the line along each axis:&#xA;&#xA;: LINE-DX ( -- dx ) LINE-X2 @ LINE-X1 @ - ;&#xA;: LINE-DY ( -- dy ) LINE-Y2 @ LINE-Y1 @ - ;&#xA;&#xA;No sweat; just take x2 - x1 or y2 - y1. How about some words to decide which axis is longer, and what direction each axis is moving in?&#xA;&#xA;: X-LONGER? ( -- f ) LINE-DX ABS LINE-DY ABS   ;&#xA;: LINE-LEFT? ( -- f ) LINE-DX 0 &lt; ;&#xA;: LINE-UP? 
( -- f ) LINE-DY 0 &lt; ;&#xA;&#xA;Even if you&#39;re not well-practiced reading postfix, I hope it&#39;s pretty clear what these are doing.&#xA;&#xA;Now let&#39;s define some words for incrementing or decrementing, depending on which direction the line is going:&#xA;&#xA;: LINE-XINC ( x -- x ) LINE-LEFT? IF 1- ELSE 1+ THEN ;&#xA;: LINE-YINC ( y -- y ) LINE-UP? IF 1- ELSE 1+ THEN ;&#xA;: LINE-INC ( x|y x? -- x|y ) IF LINE-XINC ELSE LINE-YINC THEN ;&#xA;&#xA;LINE-INC is our first and only word to take two values on the stack - the top is a boolean that determines if we&#39;re talking about the X or Y axis. We will soon use it in conjunction with X-LONGER? to abstract away incrementing the &#34;long&#34;&#xA;vs. &#34;short&#34; axis.&#xA;&#xA;: LINE-LONG ( -- p ) X-LONGER? IF LINE-X ELSE LINE-Y THEN ;&#xA;: LINE-SHORT ( -- p ) X-LONGER? 0= IF LINE-X ELSE LINE-Y THEN ;&#xA;: LINE-LONG-INC! ( -- ) LINE-LONG @ X-LONGER? LINE-INC LINE-LONG ! ;&#xA;: LINE-SHORT-INC! ( -- ) LINE-SHORT @ X-LONGER? 0= LINE-INC LINE-SHORT ! ;&#xA;&#xA;LINE-LONG-INC! is a little tricky, so let&#39;s walk through it:&#xA;&#xA;LINE-LONG returns a pointer to either the LINE-X or LINE-Y variable. &#xA;@ fetches the current coordinate along the long axis. &#xA;X-LONGER? pushes &#34;true&#34; onto the stack if X is the long axis (and thus the X coordinate is what&#39;s on the stack)&#xA;LINE-INC calls LINE-XINC if X is long, or LINE-YINC if Y is long. This increments or decrements the value, depending on the direction of the line. The new coordinate is the one value left on the stack.&#xA;LINE-LONG ! fetches the appropriate pointer again and stores the new value.&#xA;&#xA;LINE-SHORT-INC! is basically the same, except with an 0= in there as a &#34;logical not&#34; for X-LONGER?. (It didn&#39;t quite seem worthwhile to define Y-LONGER? on its own.)&#xA;&#xA;Now let&#39;s define some useful words for the error / fractional pixel calculation:&#xA;&#xA;: LINE-LONG-LEN ( -- l ) X-LONGER? 
IF LINE-DX ELSE LINE-DY THEN ABS ;&#xA;: LINE-SHORT-LEN ( -- l ) X-LONGER? IF LINE-DY ELSE LINE-DX THEN ABS ;&#xA;: LINE-LONG-ERR ( -- err ) LINE-LONG-LEN 2  ;&#xA;: LINE-SHORT-ERR ( -- err ) LINE-SHORT-LEN 2  ;&#xA;: LINE-INIT-ERR! ( -- ) LINE-LONG-LEN LINE-ERR ! ;&#xA;: LINE-ERR-ACC ( -- err ) LINE-ERR @ LINE-SHORT-ERR + ;&#xA;&#xA;LINE-INIT-ERR! defines the initial error value as half a pixel (with LINE-LONG-ERR being the implicit denominator). LINE-ERR-ACC fetches the current error and adds the appropriate fraction along the short axis, leaving the new value on the stack.&#xA;&#xA;: LINE-ERR-INC! ( err -- err ) DUP LINE-LONG-ERR   = IF LINE-LONG-ERR - LINE-SHORT-INC! THEN ;&#xA;: LINE-ERR-ACC! ( -- ) LINE-ERR-ACC LINE-ERR-INC! LINE-ERR ! ;&#xA;: LINE-STEP ( -- ) LINE-LONG-INC! LINE-ERR-ACC! ;&#xA;&#xA;LINE-ERR-INC! takes the incremented error value, determines if we&#39;ve overflown the fraction into the next pixel, and if so, decrements the error value and increments the coordinate along the short axis. The updated error value is left on the stack. This is the only place in the algorithm where I chose to use a stack-manipulation word.* I could have gotten by without it by just calling LINE-ERR-ACC a couple of times, but it would have made the definition longer and arguably harder to follow.&#xA;&#xA;LINE-ERR-ACC! handles accumulating the error, incrementing the short axis if necessary, and storing the new error. Finally, LINE-STEP puts all the core logic together - increment along the long axis, then decide whether we need to increment along the short axis.&#xA;&#xA;All that&#39;s left is to run it in a loop:&#xA;&#xA;: PLOT-LINE-STEP ( -- ) LINE-X @ LINE-Y @ PLOT-XY ;&#xA;: DO-LINE ( -- ) LINE-INIT-ERR! LINE-LONG-LEN 0 DO PLOT-LINE-STEP LINE-STEP LOOP PLOT-LINE-STEP ;&#xA;&#xA;: LINE ( x1 y1 x2 y2 -- ) &#xA;  LINE-Y2 ! LINE-X2 ! DUP LINE-Y ! LINE-Y1 ! DUP LINE-X ! LINE-X1 ! 
DO-LINE ;&#xA;&#xA;The final definition of LINE takes four values on the stack and immediately puts them into variables that are used by all the other words.&#xA;&#xA;IMO, this is what Forth enthusiasts mean when they say things like &#34;write lots of small definitions&#34;, or &#34;the stack shouldn&#39;t need to be very deep&#34;, or &#34;you don&#39;t need local variables&#34;. There are 24 one line function definitions up there. No individual definition is particularly complicated or hard to read. We do virtually no stack manipulation.&#xA;&#xA;Let&#39;s see it in action!&#xA;&#xA;0 0 0 15 LINE&#xA;0 0 15 15 LINE&#xA;30 15 0 0 LINE&#xA;60 15 0 0 LINE&#xA;79 7 0 0 LINE&#xA;79 7 60 15 LINE&#xA;0 15 60 15 LINE&#xA;&#xA;PRINT-SCREEN&#xA;$$$$$$++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++&#xA;$$$$$$$$$$$$$$$$$+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++&#xA;$+$+$$+$$$$++++++$$$$$$$$$$$$+++++++++++++++++++++++++++++++++++++++++++++++++++&#xA;$++$++$$+++$$$$++++++++++++++$$$$$$$$$$$++++++++++++++++++++++++++++++++++++++++&#xA;$+++$+++$$+++++$$$$+++++++++++++++++++++$$$$$$$$$$$+++++++++++++++++++++++++++++&#xA;$++++$++++$$+++++++$$$$++++++++++++++++++++++++++++$$$$$$$$$$$$+++++++++++++++++&#xA;$+++++$+++++$$+++++++++$$$$++++++++++++++++++++++++++++++++++++$$$$$$$$$$$++++++&#xA;$++++++$++++++$$+++++++++++$$$$+++++++++++++++++++++++++++++++++++++++++++$$$$$$&#xA;$+++++++$+++++++$$+++++++++++++$$$$+++++++++++++++++++++++++++++++++++++++++$$++&#xA;$++++++++$++++++++$$+++++++++++++++$$$$+++++++++++++++++++++++++++++++++++$$++++&#xA;$+++++++++$+++++++++$$+++++++++++++++++$$$$++++++++++++++++++++++++++++$$$++++++&#xA;$++++++++++$++++++++++$$+++++++++++++++++++$$$$++++++++++++++++++++++$$+++++++++&#xA;$+++++++++++$+++++++++++$$+++++++++++++++++++++$$$$+++++++++++++++$$$+++++++++++&#xA;$++++++++++++$++++++++++++$$+++++++++++++++++++++++$$$$+++++++++$$++++++++++++++&#xA;$+++++++++++++$+++++++++++++$$+++++++++++++++++++++++++$$$$+++
$$++++++++++++++++&#xA;$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$++++++++++++++++++&#xA;++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++&#xA;++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++&#xA;++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++&#xA;++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++&#xA;++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++&#xA;++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++&#xA;++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++&#xA;++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++&#xA;&#xA;Lovely!&#xA;&#xA;Now, there is plenty to criticize about this code. It does all kinds of redundant recalculation that in any sane C implementation would have been stashed away into a local, for example. But that&#39;s fixable with a little more effort; I might do another blog post where I apply some of Forth&#39;s fun metaprogramming tricks to that problem. &#xA;&#xA;#forth #essays #code]]&gt;</description>
      <content:encoded><![CDATA[<p>I fairly frequently see people who are taking an interest in Forth struggle with the idea of programming without local variables. I struggled with it when I started writing Forth! I feel like there&#39;s an unspoken assumption for people coming to Forth from other languages, and if I were to speak it aloud, it would sound something like “temporary data should go on the stack”.</p>

<p>Because... functions should be re-entrant by default! They should clean up after themselves! Global variables are bad and must be avoided at all costs! Functions should be “pure” and take all of their inputs as parameters, avoiding hidden dependencies!</p>

<p>All of these ideas of what “good code” looks like are wrong in Forth.</p>

<p>It is actually extremely common for Forth words to rely on implicit context, which is globally accessible through other Forth words. This is often how you build DSLs! </p>

<p>Perhaps you are familiar with the JavaScript <code>canvas</code> API. It&#39;s based on PostScript, as are most vector drawing APIs, and PostScript, as you may know, is a Forth-like postfix language for printed graphics. The <code>canvas</code> API has a <em>bunch</em> of implicit state. When you draw a rectangle, for example, you pass in just the position and size. If you want to specify properties like the fill colour, stroke colour, stroke width, line cap style, and on and on and on, you call setter methods before calling the draw function. If you want to preserve the previous canvas state and return to it when you&#39;re done, you can explicitly push it onto a stack.</p>

<p>This is one secret sauce to writing small Forth words – you build little vocabularies that all work with some kernel of shared state.</p>

<h2 id="let-s-implement-bresenham-s-line-algorithm">Let&#39;s implement Bresenham&#39;s line algorithm</h2>

<p>I had the idea to implement an algorithm where juggling all of the state on the stack would be a nightmare, to show an example of what this looks like in practice. I&#39;ve always found Bresenham&#39;s line-drawing algorithm kind of awkward – most implementations in C switch between several nearly-identical code blocks depending on how steep the line is. But the core idea is actually very simple, and the awkward near-duplication of the standard C implementation does not have to be reproduced in Forth.</p>

<p>First we will define a simple textual canvas vocabulary:</p>

<pre><code>80 CONSTANT SCREEN-W 
24 CONSTANT SCREEN-H
CREATE SCREEN SCREEN-W SCREEN-H * ALLOT
CREATE SCREEN-BRUSH KEY + C,

: SET-BRUSH ( -- ) KEY SCREEN-BRUSH C! ;
: FILL-SCREEN ( -- ) SCREEN-W SCREEN-H * SCREEN + SCREEN DO I SCREEN-BRUSH C@ SWAP C! LOOP ;
: SCREEN-XY ( x y -- ptr ) SCREEN-W * + SCREEN + ;
: PLOT-XY ( x y -- ) SCREEN-XY SCREEN-BRUSH C@ SWAP C! ;
: PRINT-ROW ( y -- ) 0 SWAP SCREEN-XY SCREEN-W TYPE ;
: PRINT-SCREEN SCREEN-H 0 DO I PRINT-ROW CR LOOP ;
</code></pre>

<p>This is ANS Forth – my personal Forths have all been lowercase, I don&#39;t usually like all the shouting.</p>

<p>This creates a buffer called <code>SCREEN</code> that is 80 columns wide by 24 rows tall. It also defines the concept of a brush, which is just an ASCII character that is put into this buffer by <code>PLOT-XY</code>. Our line-drawing routine will use <code>PLOT-XY</code> to put “pixels” on the “screen” without caring about what they look like. Kind of a canvassy idea.</p>

<p>Now let&#39;s clear the screen:</p>

<pre><code>SET-BRUSH +
FILL-SCREEN 
SET-BRUSH $
</code></pre>

<p>I use the <code>+</code> character for “off” and the <code>$</code> character for “on” because they were about the same width in the variable-width font that my browser picked when plugging this code into <a href="https://brendanator.github.io/jsForth/">jsForth</a>. The trick where <code>SET-BRUSH</code> reads the next character in the code directly is cute but brittle; it only works interactively and will break weirdly in a <code>:</code> definition. <a href="https://el-tramo.be/waforth/">WAForth</a> can&#39;t handle it at all; it pops up a dialog box asking you to type a character. Feel free to use <code>43 SCREEN-BRUSH C!</code> to draw with <code>+</code> and <code>36 SCREEN-BRUSH C!</code> to draw with <code>$</code> if you want to follow along in WAForth. Define little helper words for them even, like <code>BRUSH-+</code> and <code>BRUSH-$</code>. It&#39;s not a big problem, don&#39;t overthink it, but do make yourself comfortable.</p>
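<p>If you&#39;d rather follow along without a Forth at hand, the whole canvas fits in a few lines of Python (a sketch of mine, not part of the post&#39;s code):</p>

```python
SCREEN_W, SCREEN_H = 80, 24
screen = bytearray(b"+" * (SCREEN_W * SCREEN_H))   # FILL-SCREEN with the + brush
brush = ord("$")                                   # 36 SCREEN-BRUSH C!

def plot_xy(x, y):
    screen[y * SCREEN_W + x] = brush               # SCREEN-XY ... C!

def print_screen():
    for y in range(SCREEN_H):
        print(screen[y * SCREEN_W:(y + 1) * SCREEN_W].decode())

plot_xy(3, 2)   # one "pixel" at column 3, row 2
```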

<h3 id="an-aside-how-to-draw-a-line">An aside: How to draw a line</h3>

<p>So let&#39;s talk for a minute about how Bresenham&#39;s line-drawing algorithm works. The Wikipedia article has a bunch of math and symbols but at its core it&#39;s really very simple. Start with a specific kind of line: one that slopes upwards and to the right, no steeper than 45 degrees.</p>
<ol><li>Start at the bottom-left side of the line. Draw that pixel.</li>
<li>Move your X coordinate one to the right. Now you need to decide if the Y coordinate needs to move up one or stay where it is.</li>
<li>To do that, you keep track of a subpixel fraction; i.e. you start in the middle of a pixel (0.5), and increment it by the amount that the line has risen over the last pixel: (y2-y1)/(x2-x1), or dy/dx.</li>
<li>If the fraction is now 1 or more, move Y up one pixel and subtract 1 from the fraction; the fraction value is now somewhere within the bottom half of the next pixel up.</li>
<li>Now draw the next pixel and go back to step 2 until you end up at the top-right end of the line.</li></ol>
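<p>If it helps to see those steps concretely, here&#39;s the simple case sketched in Python – just an illustration of the description above, not the Forth we&#39;re about to write (the function name and structure are mine):</p>

```python
def gentle_line(x1, y1, x2, y2):
    """Plot a line that rises to the right, no steeper than 45 degrees.

    Follows the steps above: walk X one pixel at a time, accumulate
    the sub-pixel fraction, and bump Y whenever it overflows.
    """
    assert x2 >= x1 and y2 >= y1 and (x2 - x1) >= (y2 - y1)
    slope = (y2 - y1) / (x2 - x1)  # dy/dx, between 0 and 1
    frac = 0.5                     # start in the middle of a pixel
    y = y1
    pixels = []
    for x in range(x1, x2 + 1):
        pixels.append((x, y))      # "draw" the pixel
        frac += slope              # how much the line rose over this pixel
        if frac >= 1:              # crossed into the pixel above
            frac -= 1
            y += 1
    return pixels
```

<p>For example, <code>gentle_line(0, 0, 5, 2)</code> yields <code>[(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]</code>.</p>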

<p>This is very simple! We then layer on just a few simple tricks:</p>
<ul><li>Instead of always moving along the X axis, for lines that are taller than they are long, we need to move along the Y axis. To do this we simply always move in the direction of the longer side, and run the decision logic along the shorter axis. This way the slope is never steeper than 45 degrees.</li>
<li>If the line slopes down instead of up, then when we decide to move along the Y axis, we need to move down one pixel instead of up. We can handle this by simply decrementing instead of incrementing along the appropriate axis.</li>
<li>In the olden days, floating point numbers were very slow and integers were fast. Since the “error” value (really a fractional pixel location, but everyone calls it “error”) always has the same denominator, and we don&#39;t do anything more complicated than adding more fractions with the same denominator to it, we can just keep the denominator implicit and store the numerator in an integer. We choose <code>2 * dx</code> (when x is the long axis) as the denominator so that we can easily start exactly on a half pixel (i.e. our starting value is <code>dx/2dx</code>, and we increment by <code>2 * dy</code> every step). It doesn&#39;t actually make a huge difference what you use for a starting value, though; as long as it&#39;s smaller than your implicit denominator, you&#39;ll end up with a line that starts and ends where you expect.</li></ul>

<p>That&#39;s it! That&#39;s the whole thing.</p>
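<p>Layering those tricks onto the simple loop gives the integer-only, all-directions version. Here&#39;s a Python sketch of that combined logic (again, the names are mine, and the Forth below factors it quite differently):</p>

```python
def bresenham(x1, y1, x2, y2):
    """All-octant integer Bresenham: walk the long axis, and step the
    short axis when the error (a fraction whose implicit denominator
    is 2 * long_len) overflows a whole pixel."""
    dx, dy = x2 - x1, y2 - y1
    x_longer = abs(dx) > abs(dy)
    long_len = abs(dx) if x_longer else abs(dy)
    short_len = abs(dy) if x_longer else abs(dx)
    x_step = -1 if dx < 0 else 1      # direction along each axis
    y_step = -1 if dy < 0 else 1
    err = long_len                    # long_len / (2 * long_len) = half a pixel
    x, y = x1, y1
    pixels = [(x, y)]
    for _ in range(long_len):
        # always advance along the long axis
        if x_longer:
            x += x_step
        else:
            y += y_step
        err += 2 * short_len          # accumulate the fraction
        if err >= 2 * long_len:       # overflowed into the next pixel
            err -= 2 * long_len
            if x_longer:              # ...so step the short axis too
                y += y_step
            else:
                x += x_step
        pixels.append((x, y))
    return pixels
```

<p>Note that a line that&#39;s <code>long_len</code> pixels along its long axis always produces exactly <code>long_len + 1</code> pixels, endpoints included – which is why the Forth loop below runs <code>LINE-LONG-LEN</code> times and then plots one last point.</p>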

<h3 id="now-back-to-writing-forth">Now back to writing Forth</h3>

<p>So, first off, let&#39;s define the state that we&#39;ll need. Starting and ending X and Y coordinates, the current X and Y coordinates, and the fractional “error” value. Definitely need to remember all that.</p>

<pre><code>VARIABLE LINE-X1 VARIABLE LINE-Y1 
VARIABLE LINE-X2 VARIABLE LINE-Y2
VARIABLE LINE-X  VARIABLE LINE-Y  VARIABLE LINE-ERR
</code></pre>

<p>Now we can start defining helper words. Let&#39;s write a couple of words to figure out the length of the line along each axis:</p>

<pre><code>: LINE-DX ( -- dx ) LINE-X2 @ LINE-X1 @ - ;
: LINE-DY ( -- dy ) LINE-Y2 @ LINE-Y1 @ - ;
</code></pre>

<p>No sweat; just take <code>x2 - x1</code> or <code>y2 - y1</code>. How about some words to decide which axis is longer, and what direction each axis is moving in?</p>

<pre><code>: X-LONGER? ( -- f ) LINE-DX ABS LINE-DY ABS &gt; ;
: LINE-LEFT? ( -- f ) LINE-DX 0 &lt; ;
: LINE-UP? ( -- f ) LINE-DY 0 &lt; ;
</code></pre>

<p>Even if you&#39;re not well-practiced reading postfix, I hope it&#39;s pretty clear what these are doing.</p>

<p>Now let&#39;s define some words for incrementing or decrementing, depending on which direction the line is going:</p>

<pre><code>: LINE-XINC ( x -- x ) LINE-LEFT? IF 1- ELSE 1+ THEN ;
: LINE-YINC ( y -- y ) LINE-UP? IF 1- ELSE 1+ THEN ;
: LINE-INC ( x|y x? -- x|y ) IF LINE-XINC ELSE LINE-YINC THEN ;
</code></pre>

<p><code>LINE-INC</code> is our first and only word to take <em>two</em> values on the stack – the top is a boolean that determines if we&#39;re talking about the X or Y axis. We will soon use it in conjunction with <code>X-LONGER?</code> to abstract away incrementing the “long” vs. “short” axis.</p>

<pre><code>: LINE-LONG ( -- p ) X-LONGER? IF LINE-X ELSE LINE-Y THEN ;
: LINE-SHORT ( -- p ) X-LONGER? 0= IF LINE-X ELSE LINE-Y THEN ;
: LINE-LONG-INC! ( -- ) LINE-LONG @ X-LONGER? LINE-INC LINE-LONG ! ;
: LINE-SHORT-INC! ( -- ) LINE-SHORT @ X-LONGER? 0= LINE-INC LINE-SHORT ! ;
</code></pre>

<p><code>LINE-LONG-INC!</code> is a little tricky, so let&#39;s walk through it:</p>
<ul><li><code>LINE-LONG</code> returns a pointer to either the <code>LINE-X</code> or <code>LINE-Y</code> variable.</li>
<li><code>@</code> fetches the current coordinate along the long axis.</li>
<li><code>X-LONGER?</code> pushes “true” onto the stack if X is the long axis (and thus the X coordinate is what&#39;s on the stack).</li>
<li><code>LINE-INC</code> calls <code>LINE-XINC</code> if X is long, or <code>LINE-YINC</code> if Y is long. This increments or decrements the value, depending on the direction of the line. The new coordinate is the one value left on the stack.</li>
<li><code>LINE-LONG !</code> fetches the appropriate pointer again and stores the new value.</li></ul>

<p><code>LINE-SHORT-INC!</code> is basically the same, except with a <code>0=</code> in there as a “logical not” for <code>X-LONGER?</code>. (It didn&#39;t quite seem worthwhile to define <code>Y-LONGER?</code> on its own.)</p>

<p>Now let&#39;s define some useful words for the error / fractional pixel calculation:</p>

<pre><code>: LINE-LONG-LEN ( -- l ) X-LONGER? IF LINE-DX ELSE LINE-DY THEN ABS ;
: LINE-SHORT-LEN ( -- l ) X-LONGER? IF LINE-DY ELSE LINE-DX THEN ABS ;
: LINE-LONG-ERR ( -- err ) LINE-LONG-LEN 2 * ;
: LINE-SHORT-ERR ( -- err ) LINE-SHORT-LEN 2 * ;
: LINE-INIT-ERR! ( -- ) LINE-LONG-LEN LINE-ERR ! ;
: LINE-ERR-ACC ( -- err ) LINE-ERR @ LINE-SHORT-ERR + ;
</code></pre>

<p><code>LINE-INIT-ERR!</code> defines the initial error value as half a pixel (with <code>LINE-LONG-ERR</code> being the implicit denominator). <code>LINE-ERR-ACC</code> fetches the current error and adds the appropriate fraction along the short axis, leaving the new value on the stack.</p>

<pre><code>: LINE-ERR-INC! ( err -- err ) DUP LINE-LONG-ERR &gt;= IF LINE-LONG-ERR - LINE-SHORT-INC! THEN ;
: LINE-ERR-ACC! ( -- ) LINE-ERR-ACC LINE-ERR-INC! LINE-ERR ! ;
: LINE-STEP ( -- ) LINE-LONG-INC! LINE-ERR-ACC! ;
</code></pre>

<p><code>LINE-ERR-INC!</code> takes the incremented error value, determines if we&#39;ve overflowed the fraction into the next pixel, and if so, decrements the error value and increments the coordinate along the short axis. The updated error value is left on the stack. <em>This is the only place in the algorithm where I chose to use a stack-manipulation word.</em> I could have gotten by without it by just calling <code>LINE-ERR-ACC</code> a couple of times, but it would have made the definition longer and arguably harder to follow.</p>

<p><code>LINE-ERR-ACC!</code> handles accumulating the error, incrementing the short axis if necessary, and storing the new error. Finally, <code>LINE-STEP</code> puts all the core logic together – increment along the long axis, then decide whether we need to increment along the short axis.</p>

<p>All that&#39;s left is to run it in a loop:</p>

<pre><code>: PLOT-LINE-STEP ( -- ) LINE-X @ LINE-Y @ PLOT-XY ;
: DO-LINE ( -- ) LINE-INIT-ERR! LINE-LONG-LEN 0 DO PLOT-LINE-STEP LINE-STEP LOOP PLOT-LINE-STEP ;

: LINE ( x1 y1 x2 y2 -- ) 
  LINE-Y2 ! LINE-X2 ! DUP LINE-Y ! LINE-Y1 ! DUP LINE-X ! LINE-X1 ! DO-LINE ;
</code></pre>

<p>The final definition of <code>LINE</code> takes four values on the stack and immediately puts them into variables that are used by all the other words.</p>

<p>IMO, this is what Forth enthusiasts mean when they say things like “write lots of small definitions”, or “the stack shouldn&#39;t need to be very deep”, or “you don&#39;t need local variables”. There are <em>24</em> one-line definitions up there. No individual definition is particularly complicated or hard to read. We do virtually no stack manipulation.</p>

<p>Let&#39;s see it in action!</p>

<pre><code>0 0 0 15 LINE
0 0 15 15 LINE
30 15 0 0 LINE
60 15 0 0 LINE
79 7 0 0 LINE
79 7 60 15 LINE
0 15 60 15 LINE

PRINT-SCREEN
</code></pre>

<pre><code>$$$$$$++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
$$$$$$$$$$$$$$$$$+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
$+$+$$+$$$$++++++$$$$$$$$$$$$+++++++++++++++++++++++++++++++++++++++++++++++++++
$++$++$$+++$$$$++++++++++++++$$$$$$$$$$$++++++++++++++++++++++++++++++++++++++++
$+++$+++$$+++++$$$$+++++++++++++++++++++$$$$$$$$$$$+++++++++++++++++++++++++++++
$++++$++++$$+++++++$$$$++++++++++++++++++++++++++++$$$$$$$$$$$$+++++++++++++++++
$+++++$+++++$$+++++++++$$$$++++++++++++++++++++++++++++++++++++$$$$$$$$$$$++++++
$++++++$++++++$$+++++++++++$$$$+++++++++++++++++++++++++++++++++++++++++++$$$$$$
$+++++++$+++++++$$+++++++++++++$$$$+++++++++++++++++++++++++++++++++++++++++$$++
$++++++++$++++++++$$+++++++++++++++$$$$+++++++++++++++++++++++++++++++++++$$++++
$+++++++++$+++++++++$$+++++++++++++++++$$$$++++++++++++++++++++++++++++$$$++++++
$++++++++++$++++++++++$$+++++++++++++++++++$$$$++++++++++++++++++++++$$+++++++++
$+++++++++++$+++++++++++$$+++++++++++++++++++++$$$$+++++++++++++++$$$+++++++++++
$++++++++++++$++++++++++++$$+++++++++++++++++++++++$$$$+++++++++$$++++++++++++++
$+++++++++++++$+++++++++++++$$+++++++++++++++++++++++++$$$$+++$$++++++++++++++++
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
</code></pre>

<p>Lovely!</p>

<p>Now, there is plenty to criticize about this code. It does all kinds of redundant recalculation that any sane C implementation would have stashed away in a local, for example. But that&#39;s fixable with a little more effort; I might do another blog post where I apply some of Forth&#39;s fun metaprogramming tricks to that problem.</p>

<p><a href="https://blog.information-superhighway.net/tag:forth" class="hashtag"><span>#</span><span class="p-category">forth</span></a> <a href="https://blog.information-superhighway.net/tag:essays" class="hashtag"><span>#</span><span class="p-category">essays</span></a> <a href="https://blog.information-superhighway.net/tag:code" class="hashtag"><span>#</span><span class="p-category">code</span></a></p>
]]></content:encoded>
      <guid>https://blog.information-superhighway.net/forth-the-local-variable-question</guid>
      <pubDate>Sat, 18 Feb 2023 00:41:54 +0000</pubDate>
    </item>
    <item>
      <title>Return to Monkey Island: That Ending</title>
      <link>https://blog.information-superhighway.net/return-to-monkey-island-that-ending</link>
      <description>&lt;![CDATA[I wrote this analysis of Return to Monkey Island&#39;s ending on October 4th, a couple of weeks after finishing it, in a kind of manic fugue state, writing well past the time I should have been in bed. After I&#39;d written it, I decided it probably needed an editing pass before I posted it, but life immediately got in the way and I never looked at it again. Today I picked it up and read through it and decided not a word needed to be changed. So, without further ado, here it is. &#xA;&#xA;HERE BE SPOILERS. You&#39;ve been warned.!--more--&#xA;&#xA;So, the ending. What does it mean?&#xA;&#xA;I am not interested in the question of &#34;what &#39;really&#39; happened&#34; - that&#39;s a boring question, and the game takes great pains to make clear that the answer is not important. What I am interested in is, &#34;what is the game trying to say?&#34;&#xA;&#xA;Return to Monkey Island touches on a lot of themes and asks a lot of questions, but I&#39;m going to focus on two:&#xA;&#xA;What is The Secret of Monkey Island?&#xA;What makes Guybrush any better than LeChuck?&#xA;&#xA;The game&#39;s answer to the first question is clear: it is not important. It could be anything. Feel free to imagine whatever makes you happiest. This is what every character in the game who speculates about The Secret does, and this is what the game invites the player to do at the end.&#xA;&#xA;The second question - the game doesn&#39;t call explicit attention to it very often, but it runs from the beginning of the game to the very end. The first real dialog tree of the game is Guybrush going through all the ways that he believes LeChuck has wronged him, while the lookout doesn&#39;t seem particularly convinced. You&#39;re both pirates, after all. He&#39;s after exactly the same thing as you. What makes you the hero and him the villain?&#xA;&#xA;  He&#39;s an evil zombie ghost pirate, terror of the seas and sometimes the land! 
A nefarious, conniving, murdering scallywag! A depraved, ruthless villain! And a loud talker! And I&#39;m... not those things!&#xA;&#xA;But. You take a spot on LeChuck&#39;s crew anyway. You have casual, low-stakes conversations with him. You fill out LeChuck&#39;s paperwork. He&#39;s... just a pirate, who wants exactly the same thing as you, for similarly vague reasons.&#xA;&#xA;On Monkey Island, you are forced to get even closer to LeChuck in order to break the magical voodoo lock on the map to The Secret - learning his favourite food, reading his diary, humming his theme song. In the end you&#39;re both double-crossed by Captain Madison and the new pirate leaders.&#xA;&#xA;As you search for the golden keys in Part IV the game starts to make it really obvious that Guybrush&#39;s single-minded pursuit of The Secret is starting to have negative consequences on the people around him. Scenes that were played for laughs get re-contextualized as Elaine learns more about Guybrush has been up to. It&#39;s revealed that Guybrush even directly fucked up the project Elaine has been working tirelessly at for... however long it takes to turn an entire island into a grove of lime trees. A hell of a lot longer than Guybrush has been seeking out The Secret.&#xA;&#xA;There is also apparently a subtle thing where the pamphlet boasting of LeChuck&#39;s accomplishment fills in as Guybrush does the same nasty stuff. The bingo card doesn&#39;t quite fill up - Guybrush can&#39;t murder anyone who gets in his way - but the game wants the player to pause and realize that Guybrush really can be quite the scoundrel. After all... he&#39;s a pirate.&#xA;&#xA;Which leads us, finally, to the ending. After chasing LeChuck through an underground puzzle gauntlet back on Monkey Island once more, Guybrush walks through a door and ends up... on Amusement Park Melee Island. Stan appears and tells Guybrush it&#39;s closing time; Elaine asks if he&#39;s ready to go. 
An animatronic LeChuck and an animatronic Captain Lila are torturing an animatronic Locke Smith, in an endless loop, demanding the key that opens the chest that contains The Secret. There&#39;s also animatronic Otis, an animatronic Old Pirate Leader, an animatronic Widey Bones - everyone you&#39;d expect to meet on High Street is back, but as a cardboard cutout version of themselves.&#xA;&#xA;From here you can pretty much pick your ending; they all end up essentially the same, with unique little 5-second videos that play after the (unskippable!) credits roll. There are 11 of these in total. YouTube is there and it doesn&#39;t take long, but you aren&#39;t really meant to watch them all - you&#39;re meant to pick the option that&#39;s most meaningful to you and that becomes the conclusion to your Monkey Island story.&#xA;&#xA;In some ways this is a nice way to acknowledge that Monkey Island is many things to many people. But... if that&#39;s all it was, there are many more narratively satisfying ways to write it. You could have had Boybrush interrupt Guybrush right before he opens the chest and let the player pick what&#39;s inside as it&#39;s revealed, for example. Instead it&#39;s a t-shirt, and it&#39;s only after Boybrush complains that the player can make something else up. Something else is happening here.&#xA;&#xA;Ron Gilbert and Dave Grossman had free reign to do whatever they wanted. If they thought Guybrush should have a final confrontation with LeChuck before the rug was pulled, there would have been one. Stopping the story before it happens, making the player see a crummy secret, ending up in a cheap fun-house mirror version of the world... it all must add up to something. I couldn&#39;t see what it could be, at first.&#xA;&#xA;My current reading is this: The deliberate anticlimax you, as a player, feel, as you see behind the curtain, as you easily pop open that chest - Guybrush felt it too. How could he not? 
Elaine warns him, after rattling off the complaints of everyone he&#39;s screwed over along the way:&#xA;&#xA;  It&#39;s just that I&#39;m worried that The Secret can&#39;t possibly measure up to the effort and anticipation.&#xA;&#xA;Was it worth it? No. No treasure in the world could possibly have been worth it. Nothing in that chest could have made him happy.&#xA;&#xA;Confronted with that? Suddenly his quest must feel... fake. Hollow. Pointless and cheap. A trip to the amusement park that&#39;s over. Nothing left to do but to turn out the lights and go home.&#xA;&#xA;Guybrush has a distinctly different character arc in different Monkey Island games. In Secret of Monkey Island, he&#39;s - how shall I put this - a little shit. Sure, he&#39;s a sweet little doe-eyed boy who thinks pirates are swell and just wants to go on an adventure, but he&#39;s also the kind of kid who will walk into someone&#39;s house and just insult the owner until he&#39;s physically thrown out. Still, he doesn&#39;t really leave anyone worse off than he found them, except for maybe accidentally sinking the Sea Monkey with a giant boulder. But that&#39;s an honest mistake; it&#39;s possible to make Guybrush avoid doing it.&#xA;&#xA;In Monkey Island 2, Guybrush is, quite frankly, abusive and selfish. Everyone he meets, he throws directly under the bus. You have to steal poor Wally&#39;s monocle, without which he&#39;s blind. Get the cook fired to steal his job, which you immediately bail on. Saw off the Man of Low Moral Fiber&#39;s pegleg. Nail Stan into a coffin and leave him for dead. Get Kate Capsize arrested.&#xA;&#xA;Even Elaine isn&#39;t spared. First he crashes her party and steals her grandfather&#39;s map piece. Then when he&#39;s caught, Elaine reveals that she&#39;d dumped him. 
Here Guybrush can, at the players option, can endlessly neg her with quips like &#34;I came all this way to see you - at least get me a beer&#34; or &#34;Gosh you&#39;re cute when you&#39;re pretending to be mad.&#34; Eventually the player will catch on that the right strategy is to express how madly in love Guybrush still is with her; how miserable he is without her... and as soon as the walls start to come down and Elaine shows the slightest warmth towards him, he says, &#34;Does that mean you&#39;re going to let me have the map?&#34;&#xA;&#xA;In a way, Guybrush&#39;s arc in Return to Monkey Island seems similar - his obsession with finding The Secret at all costs mirroring his obsession with finding Big Whoop at all costs. Two differences stand out.&#xA;&#xA;The first is his reason for pursuing it. With Big Whoop, he wanted a new story that would command respect from his peers. He&#39;s driven by the fear that his biggest triumph is already behind him; that he has to top his last adventure in order to convince everyone that he truly is a mighty pirate, and that defeating LeChuck wasn&#39;t just a fluke. With The Secret, Guybrush is driven largely by his rivalry with LeChuck - he wants it because under no circumstances can LeChuck be allowed to find it first.&#xA;&#xA;The second is his relationships with the people around him. In MI2, Guybrush cares about nobody and nobody cares about him. He screws them over without giving them a second thought, confident that soon he&#39;ll find Big Whoop and be a big shot among pirates. In RtMI, on the other hand, he greets everyone on Melee like an old friend. He&#39;s cheerful and friendly with everyone new that he encounters. He&#39;s married to Elaine and clearly still deeply in love. The destruction he leaves behind is... different. A thoughtlessness without callousness. 
Guybrush takes Elaine&#39;s flyer because he needs it, and it hasn&#39;t occurred to him that this might cause a huge headache for his wife - not because he doesn&#39;t care if it does or not.&#xA;&#xA;But of course, intent isn&#39;t magic. Well-intentioned or not, his list of misdeeds rivals LeChuck&#39;s. &#34;I just hope it turns out to be worth all the effort,&#34; says Elaine (being, quite frankly, unreasonably supportive of her dipshit husband).&#xA;&#xA;So. Was a stupid t-shirt worth it? The environmental devastation caused by whittling a mop handle, of all things? The earthquake that burned down the Scumm Bar? The closing of the pirate museum, where so much of his history was being reverently kept? The implosion of an island&#39;s government? The torpedoing of his own wife&#39;s tireless efforts to cure a pirate plague?&#xA;&#xA;Of course not. It couldn&#39;t be.&#xA;&#xA;Guybrush gets to the very end of his quest and discovers that the contents of the chest is not what was important.&#xA;&#xA;What, in this story, might Guybrush decide was more important? The adventure? The story? Or... his friends? His wife, who is there waiting for him, as he turns out the lights in Amusement Park Melee Island, ready to take him home and face whatever comes next after the disappointment?&#xA;&#xA;Why is there no showdown with LeChuck? Because LeChuck was never the problem. The showdown is between Guybrush and The Secret, and I think Guybrush loses, in that he doesn&#39;t get what he thought he wanted, and also, he wins, in that, maybe he&#39;s finally able to see a childish fantasy for what it is, and start to focus on what is important in his life instead.&#xA;&#xA;The last line of the game is Elaine saying to Guybrush, &#34;I found the lost map to the treasure of Mire Island. It&#39;s going to be a fun adventure.&#34; Clearly, they will embark on it together.&#xA;&#xA;#games #essays]]&gt;</description>
      <content:encoded><![CDATA[<p>I wrote this analysis of Return to Monkey Island&#39;s ending on October 4th, a couple of weeks after finishing it, in a kind of manic fugue state, writing well past the time I should have been in bed. After I&#39;d written it, I decided it probably needed an editing pass before I posted it, but life immediately got in the way and I never looked at it again. Today I picked it up and read through it and decided not a word needed to be changed. So, without further ado, here it is.</p>

<p>HERE BE SPOILERS. You&#39;ve been warned.</p>

<p>So, the ending. What does it mean?</p>

<p>I am not interested in the question of “what &#39;really&#39; happened” – that&#39;s a boring question, and the game takes <em>great</em> pains to make clear that <em>the answer is not important</em>. What I <em>am</em> interested in is, “what is the game trying to say?”</p>

<p>Return to Monkey Island touches on a lot of themes and asks a lot of questions, but I&#39;m going to focus on two:</p>
<ul><li>What is The Secret of Monkey Island?</li>
<li>What makes Guybrush any better than LeChuck?</li></ul>

<p>The game&#39;s answer to the first question is clear: it is not important. It could be anything. Feel free to imagine whatever makes you happiest. This is what every character in the game who speculates about The Secret does, and this is what the game invites the player to do at the end.</p>

<p>The second question – the game doesn&#39;t call explicit attention to it very often, but it runs from the beginning of the game to the very end. The first real dialog tree of the game is Guybrush going through all the ways that he believes LeChuck has wronged him, while the lookout doesn&#39;t seem particularly convinced. You&#39;re both pirates, after all. He&#39;s after exactly the same thing as you. What makes you the hero and him the villain?</p>

<blockquote><p>He&#39;s an evil zombie ghost pirate, terror of the seas and sometimes the land! A nefarious, conniving, murdering scallywag! A depraved, ruthless villain! And a loud talker! And I&#39;m... not those things!</p></blockquote>

<p>But. You take a spot on LeChuck&#39;s crew anyway. You have casual, low-stakes conversations with him. You fill out LeChuck&#39;s <em>paperwork</em>. He&#39;s... just a pirate, who wants exactly the same thing as you, for similarly vague reasons.</p>

<p>On Monkey Island, you are forced to get even closer to LeChuck in order to break the magical voodoo lock on the map to The Secret – learning his favourite food, reading his diary, humming his theme song. In the end you&#39;re both double-crossed by Captain Madison and the new pirate leaders.</p>

<p>As you search for the golden keys in Part IV the game starts to make it really obvious that Guybrush&#39;s single-minded pursuit of The Secret is starting to have negative consequences on the people around him. Scenes that were played for laughs get re-contextualized as Elaine learns more about what Guybrush has been up to. It&#39;s revealed that Guybrush even directly fucked up the project Elaine has been working tirelessly at for... however long it takes to turn an entire island into a grove of lime trees. A hell of a lot longer than Guybrush has been seeking out The Secret.</p>

<p>There is also apparently a subtle thing where the pamphlet boasting of LeChuck&#39;s accomplishment fills in as Guybrush does the same nasty stuff. The bingo card doesn&#39;t quite fill up – Guybrush can&#39;t murder anyone who gets in his way – but the game wants the player to pause and realize that Guybrush really can be quite the scoundrel. After all... he&#39;s a pirate.</p>

<p>Which leads us, finally, to the ending. After chasing LeChuck through an underground puzzle gauntlet back on Monkey Island once more, Guybrush walks through a door and ends up... on Amusement Park Melee Island. Stan appears and tells Guybrush it&#39;s closing time; Elaine asks if he&#39;s ready to go. An animatronic LeChuck and an animatronic Captain Lila are torturing an animatronic Locke Smith, in an endless loop, demanding the key that opens the chest that contains The Secret. There&#39;s also animatronic Otis, an animatronic Old Pirate Leader, an animatronic Widey Bones – everyone you&#39;d expect to meet on High Street is back, but as a cardboard cutout version of themselves.</p>

<p>From here you can pretty much pick your ending; they all end up essentially the same, with unique little 5-second videos that play after the (unskippable!) credits roll. There are 11 of these in total. They&#39;re all on YouTube and it doesn&#39;t take long to get through them, but you aren&#39;t really meant to watch them all – you&#39;re meant to pick the option that&#39;s most meaningful to you and that becomes the conclusion to <em>your</em> Monkey Island story.</p>

<p>In some ways this is a nice way to acknowledge that Monkey Island is many things to many people. But... if that&#39;s all it was, there are many more narratively satisfying ways to write it. You could have had Boybrush interrupt Guybrush right before he opens the chest and let the player pick what&#39;s inside as it&#39;s revealed, for example. Instead it&#39;s a t-shirt, and it&#39;s only after Boybrush complains that the player can make something else up. Something else is happening here.</p>

<p>Ron Gilbert and Dave Grossman had free rein to do whatever they wanted. If they thought Guybrush should have a final confrontation with LeChuck before the rug was pulled, there would have been one. Stopping the story before it happens, making the player see a crummy secret, ending up in a cheap fun-house mirror version of the world... it all must add up to something. I couldn&#39;t see what it could be, at first.</p>

<p>My current reading is this: The deliberate anticlimax you, as a player, feel, as you see behind the curtain, as you easily pop open that chest – Guybrush felt it too. How could he not? Elaine warns him, after rattling off the complaints of everyone he&#39;s screwed over along the way:</p>

<blockquote><p>It&#39;s just that I&#39;m worried that The Secret can&#39;t possibly measure up to the effort and anticipation.</p></blockquote>

<p>Was it worth it? No. No treasure in the world could possibly have been worth it. Nothing in that chest could have made him happy.</p>

<p>Confronted with that? Suddenly his quest must feel... fake. Hollow. Pointless and cheap. A trip to the amusement park that&#39;s over. Nothing left to do but to turn out the lights and go home.</p>

<p>Guybrush has a distinctly different character arc in different Monkey Island games. In Secret of Monkey Island, he&#39;s – how shall I put this – a little shit. Sure, he&#39;s a sweet little doe-eyed boy who thinks pirates are swell and just wants to go on an adventure, but he&#39;s also the kind of kid who will walk into someone&#39;s house and just insult the owner until he&#39;s physically thrown out. Still, he doesn&#39;t really leave anyone worse off than he found them, except for maybe accidentally sinking the Sea Monkey with a giant boulder. But that&#39;s an honest mistake; it&#39;s possible to make Guybrush avoid doing it.</p>

<p>In Monkey Island 2, Guybrush is, quite frankly, abusive and selfish. Everyone he meets, he throws directly under the bus. You have to steal poor Wally&#39;s monocle, without which he&#39;s blind. Get the cook fired to steal his job, which you immediately bail on. Saw off the Man of Low Moral Fiber&#39;s pegleg. Nail Stan into a coffin and leave him for dead. Get Kate Capsize arrested.</p>

<p>Even Elaine isn&#39;t spared. First he crashes her party and steals her grandfather&#39;s map piece. Then when he&#39;s caught, Elaine reveals that she&#39;d dumped him. Here Guybrush can, at the player&#39;s option, endlessly neg her with quips like “I came all this way to see you – at least get me a beer” or “Gosh you&#39;re cute when you&#39;re pretending to be mad.” Eventually the player will catch on that the right strategy is to express how madly in love Guybrush still is with her; how miserable he is without her... and as soon as the walls start to come down and Elaine shows the slightest warmth towards him, he says, “Does that mean you&#39;re going to let me have the map?”</p>

<p>In a way, Guybrush&#39;s arc in Return to Monkey Island seems similar – his obsession with finding The Secret at all costs mirroring his obsession with finding Big Whoop at all costs. Two differences stand out.</p>

<p>The first is his reason for pursuing it. With Big Whoop, he wanted a new story that would command respect from his peers. He&#39;s driven by the fear that his biggest triumph is already behind him; that he has to top his last adventure in order to convince everyone that he truly is a mighty pirate, and that defeating LeChuck wasn&#39;t just a fluke. With The Secret, Guybrush is driven largely by his rivalry with LeChuck – he wants it because under no circumstances can LeChuck be allowed to find it first.</p>

<p>The second is his relationships with the people around him. In MI2, Guybrush cares about nobody and nobody cares about him. He screws them over without giving them a second thought, confident that soon he&#39;ll find Big Whoop and be a big shot among pirates. In RtMI, on the other hand, he greets everyone on Melee like an old friend. He&#39;s cheerful and friendly with everyone new that he encounters. He&#39;s married to Elaine and clearly still deeply in love. The destruction he leaves behind is... different. A thoughtlessness without callousness. Guybrush takes Elaine&#39;s flyer because he needs it, and it hasn&#39;t occurred to him that this might cause a huge headache for his wife – not because he doesn&#39;t care if it does or not.</p>

<p>But of course, intent isn&#39;t magic. Well-intentioned or not, his list of misdeeds rivals LeChuck&#39;s. “I just hope it turns out to be worth all the effort,” says Elaine (being, quite frankly, unreasonably supportive of her dipshit husband).</p>

<p>So. Was a stupid t-shirt worth it? The environmental devastation caused by whittling a mop handle, of all things? The earthquake that burned down the Scumm Bar? The closing of the pirate museum, where so much of his history was being reverently kept? The implosion of an island&#39;s government? The torpedoing of his own wife&#39;s tireless efforts to cure a pirate plague?</p>

<p>Of course not. It couldn&#39;t be.</p>

<p>Guybrush gets to the very end of his quest and discovers that the contents of the chest are not what was important.</p>

<p>What, in this story, might Guybrush decide was <em>more</em> important? The adventure? The story? Or... his friends? His wife, who is there waiting for him, as he turns out the lights in Amusement Park Melee Island, ready to take him home and face whatever comes next after the disappointment?</p>

<p>Why is there no showdown with LeChuck? Because LeChuck was never the problem. The showdown is between Guybrush and The Secret, and I think Guybrush loses, in that he doesn&#39;t get what he thought he wanted; and also he wins, in that maybe he&#39;s finally able to see a childish fantasy for what it is, and start to focus on what is important in his life instead.</p>

<p>The last line of the game is Elaine saying to Guybrush, “I found the lost map to the treasure of Mire Island. It&#39;s going to be a fun adventure.” Clearly, they will embark on it together.</p>

<p><a href="https://blog.information-superhighway.net/tag:games" class="hashtag"><span>#</span><span class="p-category">games</span></a> <a href="https://blog.information-superhighway.net/tag:essays" class="hashtag"><span>#</span><span class="p-category">essays</span></a></p>
]]></content:encoded>
      <guid>https://blog.information-superhighway.net/return-to-monkey-island-that-ending</guid>
      <pubDate>Mon, 12 Dec 2022 19:33:45 +0000</pubDate>
    </item>
    <item>
      <title>Pow! Zap! Comics on the internet? On the big screen! Biff!!</title>
      <link>https://blog.information-superhighway.net/pow-zap-comics-on-the-internet-on-the-big-screen-biff</link>
      <description>&lt;![CDATA[On Feb 9, I made the following comment on my Mastodon:&#xA;&#xA;  still can&#39;t believe the shithead behind Pupkin managed to get his webcomic turned into a movie with Jennifer Lopez&#xA;&#xA;Nobody responded to this in any way, because, quite frankly, nobody who wasn&#39;t an absolute fanatical follower of webcomics discourse 20 years ago has any idea what the fuck this means. Hell, I barely understand my feelings about this. But to do them justice, we are going to have to unpack 25 years of webcomics history.&#xA;&#xA;The process of researching and writing this has rearranged my view of the formative years of webcomics. I&#39;ve done my best to cite my sources, but primary sources are not always readily available anymore. I ended up using the Wayback Machine so much that it locked me out for a few hours. Support the Internet Archive.!--more--&#xA;&#xA;Keenspot: Origins&#xA;&#xA;Let&#39;s go chronologically. In January 1997, Darren Bluel started publishing a webcomic called &#34;Nukees&#34;. It&#39;s more or less the prototypical &#34;me and my wacky college friends&#34; strip, in that it stars thinly veiled versions of the author and his wacky college friends. 25 years later, it&#39;s still being regularly updated. Nukees is published with a Perl script called AutoKeen. We&#39;ll talk about AutoKeen in more depth in a bit.&#xA;&#xA;In March 1999, Chris Crosby started publishing a comic called Superosity. The main characters include a lovable idiot named Chris, his hyperintelligent surfboard-shaped robot friend named Boardy, and his evil asshole brother, Bobby. Bobby is drawn with all-black eyes and if he had a catchphrase it would probably be &#34;Shut the hell up, you stupid idiot!&#34;&#xA;&#xA;Remember Bobby. He&#39;s important.&#xA;&#xA;Superosity is also regularly updated - very regularly. Literally every single day since its inception. It updated today, the day you are reading this. 
No vacations, no filler days, ever, for almost 23 years. There was a contest called &#34;The Daily Grind Iron Man Challenge&#34; where you had to update your comic on time, Monday to Friday, every day, no exceptions, and when Crosby was late with an update people were legitimately worried he had died. (Turns out a snowstorm had knocked out his power.)&#xA;&#xA;Aside: I discovered while writing this that the Daily Grind Iron Man Challenge ran for fifteen years before a single victor finally claimed the prize in 2020.&#xA;&#xA;By 2000 webcomics were exploding, and the place to find new comics to read was Big Panda - a giant list of comics, ranked by a peculiar algorithm. Many of the comics listed on it were also hosted there. Here&#39;s a bunch of interviews about Big Panda with webcomic artists who were there, conducted in 2006.&#xA;&#xA;The algorithm was meant to reward comics for prominently linking to Big Panda (generally using a dropdown list of Big Panda comics, injected into your page with JavaScript) and encouraging their readers to check out other comics linked to by Big Panda. Yes, even in 1999, The Algorithm ranking content by engagement was there, although it was at least explicit and openly explained. Of course that didn&#39;t stop it from being mercilessly gamed and exploited. &#xA;&#xA;Looking at the Wayback Machine from November 1999, I pretty much read all of these. Big Panda was where webcomics lived.&#xA;&#xA;In early 2000, the creator of Big Panda lost interest in maintaining it, and the site started to go down regularly. 
So Darren Bluel, the creator of Nukees, Chris Crosby, the creator of Superosity, Teri Crosby, Chris&#39; mother and the colourist of Superosity, and some dude named Nate Stone who ended up CTO but I couldn&#39;t discover much else about, launched a new webcomics portal called Keenspot, in March of 2000.&#xA;&#xA;Keenspot provided a similar service to Big Panda - a one-stop portal where you could discover other webcomics, cross-promotional tools so everyone on Keenspot could boost the profile of everyone else on Keenspot, and hosting. Unlike Big Panda, however, they wouldn&#39;t just add anyone to the list - Keenspot had to sign you. And Keenspot started by signing virtually all of the most popular comics on Big Panda, which means Keenspot started with an exceptionally strong lineup.&#xA;&#xA;A few months after Keenspot launched, they launched Keenspace. This was Keenspot&#39;s weird little brother; anyone could create a Keenspace account and start publishing comics. You wouldn&#39;t get the full promotional might of Keenspot, but you&#39;d get the same hosting and publishing tools. &#xA;&#xA;If your Keenspace comic was good, and updated regularly, the promise was that you might get &#34;promoted&#34; to Keenspot. For a time, this meant that there was prestige attached to being a Keenspot comic. Having your comic on Keenspot meant that it had Made It. People who were into webcomics paid attention. It was News.&#xA;&#xA;Keenspace was the Geocities of webcomics and therefore, unsurprisingly, I loved it; a perfect example of the y2k-era web where a guy with a server and a janky perl script could provide thousands of people with a home for their creative work. It&#39;s still around, though in 2004 the name changed to Comic Genesis, I guess so it wouldn&#39;t be confused with Keenspot.&#xA;&#xA;As might be obvious from the name, Keenspot / Keenspace was built on Darren Bluel&#39;s AutoKeen. 
Prior to AutoKeen, artists would generally roll their own HTML by hand; AutoKeen was a big improvement in tooling for its time. The way it works: artists upload specially named image files and HTML templates via FTP. AutoKeen runs nightly, generating the day&#39;s new archive pages and front page. You could upload comics in advance and they would be automatically published on the date given in the filename. The archives were accessible via little HTML calendars, built from &amp;lt;table&amp;gt;s.&#xA;&#xA;Bluel even released a &#34;lite&#34; version of Autokeen as public domain open source.&#xA;&#xA;Keenspot also wasted no time in organizing significant cross-promotional events, like, uh, Bikeeni Summer 2000, where a bunch of Keenspot cartoonists drew pinup art of their characters in swimsuits. Weird horniness of early 2000s webcomics aside, what I want to convey is that when Keenspot launched, its members were excited to join and support each other. It was a group of peers trying to make something exciting and unprecedented happen. The web was providing a new opportunity to folks who had been shut out of the old way of doing things; rejected by newspaper syndicates, producing work that could never be published in a mainstream way.&#xA;&#xA;That&#39;s not to say that everyone thought everything about Keenspot was entirely keen - see, for example, keenparody, a comic (hosted on Keenspace, and which Chris Crosby linked from his Superosity news page) that portrays, among other nastiness, Darren Bluel cutting open the corpse of a panda with a chainsaw. It&#39;s, uh, not subtle. (Also CW for nudity in some of the other strips. I wasn&#39;t kidding about the weird horniness of the era.)&#xA;&#xA;Enter Bobby Crosby&#xA;&#xA;Now, remember Bobby? Turns out Chris Crosby has a brother in real life named Bobby. In 2002, Bobby Crosby began publishing Pupkin. 
Pupkin is a round orange dog, who can&#39;t seem to find his forever home.&#xA;&#xA;I won&#39;t say that nobody liked Pupkin - it managed to attract some fans - but it is generally not well-remembered. Pupkin isn&#39;t particularly well-drawn or funny. The punchline of the first strip is a baby saying &#34;Pupkin&#34; while looking at a round orange dog. The punchline of the second strip is Pupkin worried that he might have AIDS. &#xA;&#xA;It turns out that Bobby Crosby does not take criticism well. When someone writes anything negative about his comics, he shows up in the comments to yell at them. I haven&#39;t found anything from 2002 about Pupkin specifically, but here he is going absolutely nuclear on someone who rated one of his comics 3/5 stars:&#xA;&#xA;  Nothing that you said in your review made any sense on any level and at least half of everything you just said is a lie...&#xA;&#xA;  I could go on and on and on forever refuting all of your nonsense points, but I’m trying to stop wasting time doing such things and yours would be the longest of all. I mean, you’re yet another person who actually somehow thinks that ALL OF THE ZOMBIES USED TO BE VAMPIRES, as if the world was made up of seven billion vampires before this all started instead of seven billion humans. You CAN’T READ.&#xA;&#xA;  I don’t give a shit about word choice and tone and remaining civil. I hate everyone and everything, especially morons like you idiots who can’t read.&#xA;&#xA;I recall Bobby Crosby showing up absolutely everywhere his comic was mentioned to respond to everything. If I remember right, Bobby would sign off all of his responses with &#34;Thank you for loving Pupkin&#34;, no matter how much the original commenter had hated Pupkin. 
If you google this phrase now, you&#39;ll find that this became a Something Awful meme.&#xA;&#xA;By 2009 (I will link the article later), Gary of the webcomics blog Fleen is saying things like:&#xA;&#xA;  Bobby Crosby tossed in his two cents (at some point in the future, there may well be a “Bobby’s Law”, the point after which no useful discussion on a webcomics topic can take place).&#xA;&#xA;You know you&#39;re a level-headed individual when someone decides that they need to name a variant of Godwin&#39;s Law after you.&#xA;&#xA;As far as I am able to discern, Pupkin was never officially a Keenspot comic. But it had a Keenspot banner at the top, was clearly using AutoKeen, and as far as I can tell, there was never an associated Keenspace account. Chris Crosby linked to it with a prominent banner on Superosity&#39;s sidebar.&#xA;&#xA;Trouble brewing&#xA;&#xA;2002 is also when we really started to see some other groups outside of Keenspot try to take on webcomics. Joey Manley started Modern Tales, which took the approach of selling paid subscriptions to access comic archives. Several high-profile cartoonists quit their popular Keenspot comics to go independent - Jeff Rowland ended &#34;When I Grow Up&#34; and started &#34;Wigu&#34;, John Allison ended &#34;Bobbins&#34; and started &#34;Scary Go Round&#34;. Both were part of a loose collective called &#34;Dumbrella&#34;, which had a portal to some comics, but was largely a message board community site where cartoonists hung out with each other and their readerships.&#xA;&#xA;By 2005, the Dumbrella wiki had this to say about Keenspot:&#xA;&#xA;  Keenspot is now a festering assland of a turdhole powered by Chris Crosby&#39;s gravitational pull and haunted by the tortured souls of captured Internet cartoonists. &#xA;&#xA;I would wager that this was probably written by a member of the message boards, rather than an artist, but still. 
It&#39;s not totally obvious to an outsider exactly what was so egregious about Keenspot that caused talented artists to flee with that level of acrimony. But clearly something went very wrong.&#xA;&#xA;In 2004, Keenspot moved its corporate headquarters to Cresbard, South Dakota, population 121. The people of Cresbard had decided, due to its dwindling population, to close its high school. Keenspot bought the high school on the cheap, and as far as I can tell, Chris Crosby and his mom Teri moved to rural South Dakota to run Keenspot there. There is a Superosity storyline about how terrible an idea moving to South Dakota was, posted shortly after the blizzard that knocked Crosby out of the Daily Grind.&#xA;&#xA;That&#39;s weird, right? Moving your internet media company to an abandoned high school in rural South Dakota? That&#39;s kind of an unhinged thing to do?&#xA;&#xA;Blatant Comics and the push for More Bobby&#xA;&#xA;In 2006, Bobby Crosby launched two new comics, both now fully backed by Keenspot: +EV, a comic strip about online poker, and Last Blood, a zombie-themed graphic novel serialized online on Keenspot, published in print by Blatant Comics, and which Bobby very explicitly wants, from day one, to turn into a movie. Bobby is the writer for both of these comics, but this time is collaborating with artists. (Even Pupkin&#39;s most ardent fans would have to admit that its art is pretty amateurish.)&#xA;&#xA;Wait. What&#39;s Blatant Comics?&#xA;&#xA;In 2007, Blatant Comics had a handful of different books by a couple of different artists that they were advertising. They&#39;re... embarrassing shlock. There&#39;s a superhero parody of The Office, there&#39;s Bobby&#39;s zombie book, there&#39;s &#34;Impeach Bush! A Funny Li&#39;l Graphical Novel About The Worstest Pres&#39;dent In The History Of Forevar&#34;, and there&#39;s &#34;Dead Sonja&#34;, which is a zombie parody of the Marvel hero &#34;Red Sonja&#34;, I guess? 
The company appears to be called &#34;Blatant Comics&#34; so that they can put &#34;A Blatant Parody&#34; on the cover. Their webpage displays their address in the footer... Cresbard, SD.&#xA;&#xA;Blatant Comics has a Wikipedia page, which I am gonna quote from:&#xA;&#xA;  Blatant Comics is an independent American comic book publisher founded in 1997 by Chris Crosby... Blatant is known for publishing parody comic books such as Sloth Park, XXXena: Warrior Pornstar, and Dead Sonja: She-Zombie with a Sword...&#xA;&#xA;So prior to Keenspot, Chris Crosby wasn&#39;t just making webcomics - he was publishing print comics. Awful, pulpy, dumb parody print comics.&#xA;&#xA;In 1997 Blatant Comics published something called &#34;The EboniX-Files&#34; with taglines like &#34;The truth be out there, g!&#34; and &#34;Yo, bust a cap in da future&#39;s ass!&#34; and thought this joke was clever enough that they ALSO included, in the SAME BOOK, &#34;The Ungrammatical EboniX-Men&#34;. Chris Crosby is credited as the writer.&#xA;&#xA;I vaguely remember some shitty stereotypes of Black people in Superosity, and honestly it&#39;s possible they may have been as bad as this, I don&#39;t remember. But, yiiiiiiikes! Fuck this!&#xA;&#xA;In 2006, it would appear that Chris was using his old print comics company, which hadn&#39;t published much of anything since Keenspot launched, to try to kickstart the career of his brother Bobby. He certainly wasn&#39;t using it to promote anyone else&#39;s Keenspot comic. 
Today, Blatant Comics&#39; website is nothing but Bobby Crosby, and in fact looks startlingly similar to bobbycrosby.com.&#xA; &#xA;Chris also launched a new Keenspot comic towards the end of 2006: &#34;WICKEDPOWERED&#34;, a paid advertisement for &#34;a laser pointer with a range of 141 miles that can melt a garbage bag.&#34; WICKEDPOWERED ran for a year and a half before the sponsor pulled the plug.&#xA;&#xA;It seems clear that by 2006, we are looking at a Keenspot that is not driven by a group of peers trying to create a new medium and support each other - it is being driven, more and more, by the Crosbys&#39; desire to cash in as hard as they possibly can.&#xA;&#xA;Keenspot becomes The Crosby Show&#xA;&#xA;In March 2008, Chris and Teri Crosby buy Darren Bluel and Nate Stone&#39;s stake in Keenspot, leaving the Crosbys as the sole owners. Chris was interviewed by comixtalk.com about it. I find this quote incredibly revealing:&#xA;&#xA;  Eight years later, we failed to reach most of our goals.  I&#39;m hoping to turn that around, so that five or ten years later I can look back and be a little prouder of what Keenspot is.  We&#39;ll see.&#xA;&#xA;To me, Keenspot was absolutely at its most successful, its most magical, right at the start; all those talented artists, breaking free from the gatekeepers of newspaper comics syndicates, using new tools to find new audiences, producing work that couldn&#39;t be published any other way. What are the goals for Keenspot that they failed to reach?&#xA;&#xA;Here&#39;s Chris posting to the Bad Webcomics Wiki forums in June 2009:&#xA;&#xA;  I enjoy working on SORE THUMBS and WICKEDPOWERED, but they were written primarily for the money, not because they&#39;re the kind of thing I love to write. They both are purposefully created to be as dumb and pandering as possible. Heck, WICKEDPOWERED was a paid advertisement for a handheld laser manufacturer. 
And CROW SCARE is intended to be a SCI FI Channel original movie illustrated on cheap newsprint. &#xA;    Yes, I haven&#39;t aimed very high thus far. Bobby&#39;s goal with everything but PUPKIN and +EV has been to write fantastic blockbuster movies in graphic novel form. Maybe I should try that…&#xA;&#xA;Someone replies:&#xA;&#xA;  So you&#39;re basically admitting to being a hack that panders to the lowest common denominator for money?&#xA;&#xA;And Chris responds, simply:&#xA;&#xA;  I like money.&#xA;&#xA;Keenspot begins to implode&#xA;&#xA;At this point in time, Keenspot has slowly gone from The Place For Quality Webcomics to an aging portal whose design has not significantly changed in 8 years, and whose creators are still on it largely out of inertia. AutoKeen has not been updated in any significant way - it hasn&#39;t been upgraded to do anything as basic and obvious as generating RSS feeds, for example. By 2009 Google Reader would have been in its heyday, and RSS feeds a perfect fit for daily comics. (AutoKeen would eventually be rewritten out of frustration by an administrator of Keenspace around 2010, but the rewrite didn&#39;t add much in the way of new features - just made it easier to diagnose when it failed.) The Comicpress Wordpress theme has been out for years, making self-hosting comics with archives more accessible than ever. Comics are starting to take off on Tumblr and Twitter. As far as I can tell, Keenspot artists are still carefully naming their files, uploading them to FTP sites, and hand-tweaking bespoke HTML templates. Keenspot as a technology, in terms of the service it provides for creators, is completely stagnant and neglected.&#xA;&#xA;July 2009 - Jodie Troutman, a fairly prominent cartoonist who had &#34;graduated&#34; from Keenspace to Keenspot, is fired from Keenspot for no particular reason that anyone wants to disclose.  (CW: deadnaming in linked article) Nothing like this has happened before in Keenspot&#39;s history. 
Troutman&#39;s response:&#xA;&#xA;  Though I believe my membership was terminated unjustly and through no fault of my own, I suspect I’ll be much better off without Keenspot, whose management I never really saw eye-to-eye with.  All my friends have had great success as indie webcomics, so I can only hope to follow in their footsteps.&#xA;&#xA;I see this sort of sentiment a lot - that Keenspot isn&#39;t really helping creators, and that publishing independently is a much better value proposition.&#xA;&#xA;December 2009 - Kel McDonald is fired from Keenspot and publicly airs some dirty laundry about how unprofessional and basically useless Keenspot leadership has been towards its artists not named Crosby, and about how nobody is getting paid on time.&#xA;&#xA;Teri Crosby responds in the comments with a polite point-by-point rebuttal, signing off with &#34;Smiles&#34;. &#xA;&#xA;Bobby Crosby responds in the comments accusing everyone of being liars, ranting about how tiny and insignificant Keenspot is, how anything it does for its creators is a gift and above and beyond what they deserve and that they should be grateful, and posits that Keenspot should just be shut down. &#xA;&#xA;  &#34;To have a booth presence mismanaged year after year, as Keenspot&#39;s frequently is, is unacceptable this far down the road.&#34;&#xA;    Why??? Who cares??? What does it matter to you? Keenspot is a tiny company that shouldn&#39;t even have a booth at SDCC in the first place but does so anyway mostly just as a little bonus to its members, a little gift. Are you the same type of person who turns down a free gift because it&#39;s not nice enough for someone of your stature??? Who gives a fuck???&#xA;    &#34;We all know you guys have Keenspot on autopilot and are basically using it to fund Chris and Bobby&#39;s side projects.&#34;&#xA;    It&#39;s the OPPOSITE. 
Our projects keep Keenspot alive so even more time and money can be wasted on it because its owners for some reason love a bunch of people who mostly hate them because of your lies.&#xA;    No one in my family lives in a school.&#xA;&#xA;There&#39;s lots going on in the comments but I would like to quote this section of Scott Kurtz&#39;s reply to Bobby:&#xA;&#xA;  Blatant and Keenspot are these very disparate entities when it suits you and sister companies when it suits you. You own both companies. You took the Keenspot booth this year and gave half to Blatant.&#xA;&#xA;The Keen dream is dead&#xA;&#xA;Within days after Kel McDonald&#39;s firing, Keenspot posts new mandatory contracts that all of its artists will need to sign in six months, or else leave Keenspot. The terms of this contract are... unfavourable towards creators, compared to what came before. These contracts are not sent to creators directly but instead quietly dumped in a private forum post, leaving many in the dark. Webcomics blog Fleen published a detailed exposé.&#xA;&#xA;As one creator puts it:&#xA;&#xA;  Every Keenspot member I’ve spoken to agrees that this is the Crosbys’ way of firing everyone without having to fire anyone, since trying to ditch Kel [McDonald] blew up in their faces.&#xA;    The new contract is ridiculous, completely unreasonable, and they know that. It doesn’t just mandate a revenue split, but requires cartoonists to give up their domains, and the contracts are slated to last three to five years.&#xA;&#xA;Chris Crosby doesn&#39;t really disagree:&#xA;&#xA;  As well as not inviting or accepting any new members, we may also politely decline existing members who decide to sign the new contract. We’ll be having long discussions with each interested creator (assuming there are any) in order to work out what’s mutually beneficial and what’s not. 
If Keenspot cannot bring something substantial to the table for the creator in question, we will stop working with them.&#xA;    I had hoped Keenspot the webcomics collective and Keenspot the independent publishing concern could co-exist happily. But after two years [following a 2008 reorganization and the buyout of former partners] the resounding answer is no. Those two sides of Keenspot resent each other, and neither side is happy.&#xA;    [G]oing forward our focus will be directed solely at properties we have a long-term investment in, which is primarily Crosby-produced comics and related projects. That’s what makes the most business sense for us as a company, and we make no apologies for it. &#xA;&#xA;Aftermath&#xA;&#xA;Have a look at Chris Crosby&#39;s print comic credits, paying attention to the stuff with the &#34;Keenspot&#34; banner on top. &#34;Fartnite.&#34; &#34;Yang Gang.&#34; &#34;Barry Steakfries: From the Files of Jetpack Joyride.&#34; Chris Crosby started his career selling trash and it appears that&#39;s how he&#39;s determined to end it.&#xA;&#xA;There is a big list of comics on the Keenspot frontpage; only the titles in bold are &#34;currently updating&#34;. 7 comics are marked in bold. One of them is Marry Me, which completed its run in 2008, but for obvious reasons has ads in every news box and site banner. Another is Head Trip, which hasn&#39;t updated since July 2017. A third is No Pink Ponies, which last updated March 2018.&#xA;&#xA;In 2022, Keenspot is 4 comics.&#xA;&#xA;Marry Me&#xA;&#xA;In February 2007, Bobby Crosby and artist Remi &#34;Eisu&#34; Mokhtar launch &#34;Marry Me&#34;, at the URL marrymemovie.com. The URL and the commentary for the first page make it clear: this is a graphic novel that Bobby will adapt into a screenplay and try to sell to Hollywood. 
The webcomic exists as an elaborate movie pitch.&#xA;&#xA;You might be aware that &#34;Marry Me&#34; is now, in 2022, a big-budget Hollywood romcom, starring Jennifer Lopez and Owen Wilson. It took 15 years and the total implosion of Keenspot, a breathtaking, agonizing squandering of an enormous amount of actual talent in service of one unbearable asshole, but the motherfucker actually did it.&#xA;&#xA;And yet. None of the marketing around this movie mentions the comic it was based on. Nobody going to see it knows Bobby Crosby&#39;s name, or Blatant Comics, or Keenspot. The Crosbys spent 16 years pushing Bobby&#39;s work, and at the end of all that is this disposable rom-com; every creative decision made by everyone involved for the sole reason that they thought it would sell.&#xA;&#xA;I wonder: is Chris Crosby able to look back and be proud of that?&#xA;&#xA;#webcomics #essays]]&gt;</description>
      <content:encoded><![CDATA[<p>On Feb 9, I made the following comment on my Mastodon:</p>

<blockquote><p>still can&#39;t believe the shithead behind Pupkin managed to get his webcomic turned into a movie with Jennifer Lopez</p></blockquote>

<p>Nobody responded to this in any way, because, quite frankly, nobody who wasn&#39;t an absolute fanatical follower of webcomics discourse 20 years ago has any idea what the fuck this means. Hell, <em>I</em> barely understand my feelings about this. But to do them justice, we are going to have to unpack 25 years of webcomics history.</p>

<p>The process of researching and writing this has rearranged my view of the formative years of webcomics. I&#39;ve done my best to cite my sources, but primary sources are not always readily available anymore. I ended up using the Wayback Machine so much that it locked me out for a few hours. <a href="https://archive.org/donate">Support the Internet Archive</a>.</p>

<h2 id="keenspot-origins">Keenspot: Origins</h2>

<p>Let&#39;s go chronologically. In January 1997, Darren Bluel started publishing a webcomic called “<a href="https://www.nukees.com/">Nukees</a>”. It&#39;s more or less the prototypical “me and my wacky college friends” strip, in that it stars thinly veiled versions of the author and his wacky college friends. 25 years later, it&#39;s still being regularly updated. Nukees is published with a Perl script called AutoKeen. We&#39;ll talk about AutoKeen in more depth in a bit.</p>

<p>In March 1999, Chris Crosby started publishing a comic called <a href="http://superosity.keenspot.com/">Superosity</a>. The main characters include a lovable idiot named Chris, his hyperintelligent surfboard-shaped robot friend named Boardy, and his evil asshole brother, Bobby. Bobby is drawn with all-black eyes and if he had a catchphrase it would probably be “<a href="http://superosity.keenspot.com/d/19990304.html">Shut the hell up, you stupid idiot!</a>”</p>

<p>Remember Bobby. He&#39;s important.</p>

<p>Superosity is also regularly updated – very regularly. Literally every single day since its inception. It updated today, the day you are reading this. No vacations, no filler days, ever, for almost 23 years. There was a contest called “<a href="https://web.archive.org/web/20100106112305/http://www.crowncommission.com:80/dailygrind/">The Daily Grind Iron Man Challenge</a>” where you had to update your comic on time, Monday to Friday, every day, no exceptions, and when Crosby was late with an update <a href="https://web.archive.org/web/20061110092456/http://www.websnark.com/archives/2005/11/okay_i_admit_it.html"><em>people were legitimately worried he had died</em></a>. (Turns out a snowstorm had knocked out his power.)</p>

<p>Aside: I discovered while writing this that the Daily Grind Iron Man Challenge <a href="https://twitter.com/mikkihel/status/1287713550811758592">ran for fifteen years before a single victor finally claimed the prize in 2020</a>.</p>

<p>By 2000 webcomics were exploding, and the place to find new comics to read was <a href="https://web.archive.org/web/19991129003559/http://bigpanda.net/">Big Panda</a> – a giant list of comics, ranked by a peculiar algorithm. Many of the comics listed on it were also hosted there. <a href="http://comixtalk.com/biggie_panda_old_skool_webcomics/">Here&#39;s a bunch of interviews about Big Panda with webcomic artists who were there, conducted in 2006</a>.</p>

<p>The algorithm was meant to reward comics for prominently linking to Big Panda (generally using a dropdown list of Big Panda comics, injected into your page with JavaScript) and encouraging their readers to check out other comics linked to by Big Panda. Yes, even in 1999, The Algorithm ranking content by engagement was there, although <a href="https://web.archive.org/web/19991129021536/http://bigpanda.net/faq.html">it was at least explicit and openly explained</a>. Of course that didn&#39;t stop it from being mercilessly gamed and exploited.</p>

<p>Looking at <a href="https://web.archive.org/web/19991129003559/http://bigpanda.net/">the Wayback Machine from November 1999</a>, I pretty much read <em>all</em> of these. Big Panda was where webcomics <em>lived</em>.</p>

<p>In early 2000, the creator of Big Panda lost interest in maintaining it, and the site started to go down regularly. So Darren Bluel, the creator of Nukees, Chris Crosby, the creator of Superosity, Teri Crosby, Chris&#39; mother and the colourist of Superosity, and some dude named Nate Stone who ended up CTO but I couldn&#39;t discover much else about, launched a new webcomics portal called <a href="http://www.keenspot.com/">Keenspot</a>, in March of 2000.</p>

<p>Keenspot provided a similar service to Big Panda – a one-stop portal where you could discover other webcomics, cross-promotional tools so everyone on Keenspot could boost the profile of everyone else on Keenspot, and hosting. Unlike Big Panda, however, they wouldn&#39;t just add anyone to the list – <a href="https://web.archive.org/web/20000816220700/http://keenspot.com/faq.html">Keenspot had to sign you</a>. And Keenspot started by signing virtually all of the most popular comics on Big Panda, which means Keenspot started with an exceptionally strong lineup.</p>

<p>A few months after Keenspot launched, they launched <a href="https://web.archive.org/web/20000815095815/http://www.keenspace.com/">Keenspace</a>. This was Keenspot&#39;s weird little brother; anyone could create a Keenspace account and start publishing comics. You wouldn&#39;t get the full promotional might of Keenspot, but you&#39;d get the same hosting and publishing tools.</p>

<p>If your Keenspace comic was good, and updated regularly, the promise was that you might get “promoted” to Keenspot. For a time, this meant that there was prestige attached to being a Keenspot comic. Having your comic on Keenspot meant that it had Made It. People who were into webcomics paid attention. It was News.</p>

<p>Keenspace was the Geocities of webcomics and therefore, unsurprisingly, I loved it; a perfect example of the y2k-era web where a guy with a server and a janky perl script could provide thousands of people with a home for their creative work. It&#39;s still around, though in 2004 the name changed to <a href="http://www.comicgenesis.com/">Comic Genesis</a>, I guess so it wouldn&#39;t be confused with Keenspot.</p>

<p>As might be obvious from the name, Keenspot / Keenspace was built on Darren Bluel&#39;s AutoKeen. Prior to AutoKeen, artists would generally roll their own HTML by hand; AutoKeen was a big improvement in tooling for its time. The way it worked: artists uploaded specially named image files and HTML templates via FTP, and AutoKeen ran nightly, generating the day&#39;s new archive pages and front page. You could upload comics in advance and they would be automatically published on the date given in the filename. The archives were accessible via little HTML calendars, built from &lt;table&gt;s.</p>
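<p>To make that concrete, here&#39;s the core trick – date-encoded filenames driving a nightly publish – sketched in a few lines of Python. This is my reconstruction of the idea, not AutoKeen&#39;s actual code; the function names and the <code>***comic***</code> template marker are invented:</p>

```python
import re
from datetime import date

# Sketch of the AutoKeen idea, not its actual code: comics are uploaded as
# date-named image files (e.g. "20000815.gif"), and a nightly job publishes
# everything whose embedded date has arrived. The ***comic*** marker and
# these function names are my invention.
DATE_NAME = re.compile(r"^(\d{4})(\d{2})(\d{2})\.(gif|jpe?g|png)$")

def publishable(filenames, today):
    """Return date-named files due on or before `today`, oldest first.
    Comics uploaded in advance simply wait for a later nightly run."""
    due = []
    for name in filenames:
        m = DATE_NAME.match(name)
        if not m:
            continue  # templates, notes, etc. are ignored
        d = date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
        if d <= today:
            due.append((d, name))
    return [name for _, name in sorted(due)]

def render_page(template, comic):
    """Drop a comic image into a hand-written HTML template."""
    return template.replace("***comic***", '<img src="%s">' % comic)
```

<p>So on the night of August 15th, 2000, a run over <code>["20000814.gif", "20000815.gif", "20000816.gif", "notes.txt"]</code> would publish the 14th and 15th and leave the 16th queued for the next night.</p>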

<p>Bluel even released a <a href="http://www.keenspot.com/downloads/">“lite” version of AutoKeen</a> as public-domain open source.</p>

<p>Keenspot also wasted no time in organizing significant cross-promotional events, like, uh, <a href="https://web.archive.org/web/20000816214929/http://bikeeni2000.keenspot.com/">Bikeeni Summer 2000</a>, where a bunch of Keenspot cartoonists drew pinup art of their characters in swimsuits. Weird horniness of early 2000s webcomics aside, what I want to convey is that when Keenspot launched, its members were excited to join and support each other. It was a group of peers trying to make something exciting and unprecedented happen. The web was providing a new opportunity to folks who had been shut out of the old way of doing things; rejected by newspaper syndicates, producing work that could never be published in a mainstream way.</p>

<p>That&#39;s not to say that <em>everyone</em> thought everything about Keenspot was entirely keen – see, for example, <a href="http://keenparody.comicgenesis.com/d/20001205.html">keenparody</a>, a comic (hosted on Keenspace, and which Chris Crosby linked from his Superosity news page) that portrays, among other nastiness, Darren Bluel <a href="http://keenparody.comicgenesis.com/d/20001218.html">cutting open the corpse of a panda with a chainsaw</a>. It&#39;s, uh, not subtle. (Also CW for nudity in some of the other strips. I wasn&#39;t kidding about the weird horniness of the era.)</p>

<h2 id="enter-bobby-crosby">Enter Bobby Crosby</h2>

<p>Now, remember Bobby? Turns out Chris Crosby has a brother in real life named Bobby. In 2002, Bobby Crosby began publishing <a href="http://www.bobbycrosby.com/pupkin/d/20020701.html">Pupkin</a>. Pupkin is a round orange dog, who can&#39;t seem to find his forever home.</p>

<p>I won&#39;t say that nobody liked Pupkin – it managed to attract some fans – but it is generally not well-remembered. Pupkin isn&#39;t particularly well-drawn or funny. The punchline of the first strip is a baby saying “Pupkin” while looking at a round orange dog. The punchline of the second strip is Pupkin worried that he might have AIDS.</p>

<p>It turns out that Bobby Crosby does not take criticism well. When someone writes anything negative about his comics, he shows up in the comments to yell at them. I haven&#39;t found anything from 2002 about Pupkin specifically, but here he is <a href="https://webcomicoverlook.wordpress.com/2008/02/27/the-webcomic-overlook-34-last-blood/#comment-347">going absolutely nuclear on someone who rated one of his comics 3/5 stars</a>:</p>

<blockquote><p>Nothing that you said in your review made any sense on any level and at least half of everything you just said is a lie...</p>

<p>I could go on and on and on forever refuting all of your nonsense points, but I’m trying to stop wasting time doing such things and yours would be the longest of all. I mean, you’re yet another person who actually somehow thinks that ALL OF THE ZOMBIES USED TO BE VAMPIRES, as if the world was made up of seven billion vampires before this all started instead of seven billion humans. You CAN’T READ.</p>

<p>I don’t give a shit about word choice and tone and remaining civil. I hate everyone and everything, especially morons like you idiots who can’t read.</p></blockquote>

<p>I recall Bobby Crosby showing up absolutely everywhere his comic was mentioned to respond to everything. If I remember right, Bobby would sign off all of his responses with “Thank you for loving Pupkin”, no matter how much the original commenter had hated Pupkin. If you google this phrase now, you&#39;ll find that this became a Something Awful meme.</p>

<p>By 2009 (I will link the article later), Gary of the webcomics blog Fleen is saying things like:</p>

<blockquote><p> Bobby Crosby tossed in his two cents (at some point in the future, there may well be a “Bobby’s Law”, the point after which no useful discussion on a webcomics topic can take place).</p></blockquote>

<p>You know you&#39;re a level-headed individual when someone decides that they need to name a variant of Godwin&#39;s Law after you.</p>

<p>As far as I am able to discern, Pupkin was never <em>officially</em> a Keenspot comic. But it <a href="https://web.archive.org/web/20020924095838/http://bobbycrosby.com/">had a Keenspot banner at the top</a>, was clearly using AutoKeen, and as far as I can tell, there was never an associated Keenspace account. Chris Crosby <a href="https://web.archive.org/web/20020923150448/http://superosity.com/">linked to it with a prominent banner on Superosity&#39;s sidebar</a>.</p>

<h2 id="trouble-brewing">Trouble brewing</h2>

<p>2002 is also when we really started to see some other groups outside of Keenspot try to take on webcomics. Joey Manley started <a href="https://en.wikipedia.org/wiki/Modern_Tales">Modern Tales</a>, which took the approach of selling paid subscriptions to access comic archives. Several high-profile cartoonists quit their popular Keenspot comics to go independent – Jeff Rowland <a href="https://web.archive.org/web/20020605225942/http://www.whenigrowup.net/d/20020101.html">ended “When I Grow Up”</a> and <a href="https://web.archive.org/web/20020607225417/http://www.wigu.com/comics/wigu0001.html">started “Wigu”</a>; John Allison <a href="https://web.archive.org/web/20020926033527/http://www.bobbins.org/">ended “Bobbins”</a> and <a href="https://web.archive.org/web/20020726030835/https://scarygoround.com/">started “Scary Go Round”</a>. Both were part of a loose collective called “<a href="https://web.archive.org/web/20020921222541/http://dumbrella.com/">Dumbrella</a>”, which had a portal to some comics, but was largely a message board community site where cartoonists hung out with each other and their readerships.</p>

<p>By 2005, the <a href="https://web.archive.org/web/20050501060140/http://www.dumbwiki.com:80/index.php?title=Keenspot">Dumbrella wiki had this to say about Keenspot</a>:</p>

<blockquote><p>Keenspot is now a festering assland of a turdhole powered by Chris Crosby&#39;s gravitational pull and haunted by the tortured souls of captured Internet cartoonists.</p></blockquote>

<p>I would wager that this was probably written by a member of the message boards, rather than an artist, but still. It&#39;s not totally obvious to an outsider what exactly about Keenspot was so egregious that it caused talented artists to flee with that level of acrimony. But clearly something went very wrong.</p>

<p>In 2004, Keenspot moved its corporate headquarters to <a href="https://en.wikipedia.org/wiki/Cresbard,_South_Dakota">Cresbard, South Dakota</a>, population 121. The people of Cresbard had decided, due to the town&#39;s dwindling population, to close its high school. Keenspot bought the high school on the cheap, and as far as I can tell, Chris Crosby and his mom Teri moved to rural South Dakota to run Keenspot there. There is a <a href="http://superosity.keenspot.com/w/20051128.html">Superosity storyline about how terrible an idea moving to South Dakota was</a>, posted shortly after the blizzard that knocked Crosby out of the Daily Grind.</p>

<p>That&#39;s weird, right? Moving your internet media company to an abandoned high school in rural South Dakota? That&#39;s kind of an unhinged thing to do?</p>

<h2 id="blatant-comics-and-the-push-for-more-bobby">Blatant Comics and the push for More Bobby</h2>

<p>In 2006, Bobby Crosby launched two new comics, both now fully backed by Keenspot: <a href="https://web.archive.org/web/20090228232713/http://plusev.keenspot.com/d/20060811.html">+EV</a>, a comic strip about online poker, and <a href="http://lastblood.keenspot.com/">Last Blood</a>, a zombie-themed graphic novel serialized online on Keenspot, published in print by Blatant Comics, and which Bobby very explicitly wants, from day one, <a href="http://lastblood.keenspot.com/main/2006/12/25/last-blood-begins/">to turn into a movie</a>. Bobby is the writer for both of these comics, but this time he is collaborating with artists. (Even Pupkin&#39;s most ardent fans would have to admit that its art is pretty amateurish.)</p>

<p>Wait. What&#39;s Blatant Comics?</p>

<p>In 2007, <a href="https://web.archive.org/web/20070426101656/http://www.blatantcomics.com/">Blatant Comics</a> had a handful of different books by a couple of different artists that they were advertising. They&#39;re... embarrassing schlock. There&#39;s a superhero parody of The Office, there&#39;s Bobby&#39;s zombie book, there&#39;s “Impeach Bush! A Funny Li&#39;l Graphical Novel About The Worstest Pres&#39;dent In The History Of Forevar”, and there&#39;s “Dead Sonja”, which is a zombie parody of the Marvel hero “Red Sonja”, I guess? The company appears to be called “Blatant Comics” so that they can put “A Blatant Parody” on the cover. Their webpage displays their address in the footer... Cresbard, SD.</p>

<p><a href="https://en.wikipedia.org/wiki/Blatant_Comics">Blatant Comics has a Wikipedia page</a>, which I am gonna quote from:</p>

<blockquote><p>Blatant Comics is an independent American comic book publisher founded in 1997 by Chris Crosby... Blatant is known for publishing parody comic books such as Sloth Park, XXXena: Warrior Pornstar, and Dead Sonja: She-Zombie with a Sword...</p></blockquote>

<p>So prior to Keenspot, Chris Crosby wasn&#39;t just making webcomics – he was publishing print comics. Awful, pulpy, dumb parody print comics.</p>

<p>In 1997 Blatant Comics published something called “<a href="https://comicvine.gamespot.com/ebonix-files-1/4000-845592/">The EboniX-Files</a>” with taglines like “The truth be out there, g!” and “Yo, bust a cap in da future&#39;s ass!” and thought this joke was clever enough that they ALSO included, in the SAME BOOK, “<a href="https://www.hipcomic.com/listing/ebonix-files-1-vf-nm-x-files-spoof-with-ungrammatical-ebonix-men-x-men-spoof/170043">The Ungrammatical EboniX-Men</a>”. Chris Crosby is credited as the writer.</p>

<p>I vaguely remember some shitty stereotypes of Black people in Superosity, and honestly it&#39;s possible they may have been as bad as this, I don&#39;t remember. But, yiiiiiiikes! Fuck this!</p>

<p>In 2006, it would appear that Chris was using his old print comics company, which hadn&#39;t published much of anything since Keenspot launched, to try to kickstart the career of his brother Bobby. He certainly wasn&#39;t using it to promote anyone else&#39;s Keenspot comic. Today, <a href="http://blatantcomics.com/">Blatant Comics&#39; website</a> is nothing but Bobby Crosby, and in fact looks startlingly similar to <a href="http://www.bobbycrosby.com/">bobbycrosby.com</a>.</p>

<p>Chris also launched a new Keenspot comic towards the end of 2006: “<a href="http://wickedpowered.keenspot.com/d/20061211.html">WICKEDPOWERED</a>”, a paid advertisement for “<a href="https://web.archive.org/web/20061214173438/https://www.wickedlasers.com/">a laser pointer with a range of 141 miles that can melt a garbage bag</a>.” WICKEDPOWERED ran for a year and a half before the sponsor pulled the plug.</p>

<p>It seems clear that by 2006, we are looking at a Keenspot that is not driven by a group of peers trying to create a new medium and support each other – it is being driven, more and more, by the Crosbys&#39; desire to cash in as hard as they possibly can.</p>

<h2 id="keenspot-becomes-the-crosby-show">Keenspot becomes The Crosby Show</h2>

<p>In March 2008, Chris and Teri Crosby buy Darren Bluel and Nate Stone&#39;s stake in Keenspot, leaving the Crosbys as the sole owners. <a href="http://comixtalk.com/crosbys_consolidate_control_keenspot_interview_chris_crosby/">Chris was interviewed by comixtalk.com about it</a>. I find this quote incredibly revealing:</p>

<blockquote><p>Eight years later, we failed to reach most of our goals.  I&#39;m hoping to turn that around, so that five or ten years later I can look back and be a little prouder of what Keenspot is.  We&#39;ll see.</p></blockquote>

<p>To me, Keenspot was absolutely at its most successful, its most magical, right at the start; all those talented artists, breaking free from the gatekeepers of newspaper comics syndicates, using new tools to find new audiences, producing work that couldn&#39;t be published any other way. What are the goals for Keenspot that they failed to reach?</p>

<p>Here&#39;s Chris <a href="https://web.archive.org/web/20091230180923/http://badwebcomics.wikidot.com/forum/t-157653/bobby-crosby-s-stuff">posting to the Bad Webcomics Wiki forums</a> in June 2009:</p>

<blockquote><p>I enjoy working on SORE THUMBS and WICKEDPOWERED, but they were written primarily for the money, not because they&#39;re the kind of thing I love to write. They both are purposefully created to be as dumb and pandering as possible. Heck, WICKEDPOWERED was a paid advertisement for a handheld laser manufacturer. And CROW SCARE is intended to be a SCI FI Channel original movie illustrated on cheap newsprint.</p>

<p>Yes, I haven&#39;t aimed very high thus far. Bobby&#39;s goal with everything but PUPKIN and +EV has been to write fantastic blockbuster movies in graphic novel form. Maybe I should try that…</p></blockquote>

<p>Someone replies:</p>

<blockquote><p>So you&#39;re basically admitting to being a hack that panders to the lowest common denominator for money?</p></blockquote>

<p>And Chris responds, simply:</p>

<blockquote><p>I like money.</p></blockquote>

<h2 id="keenspot-begins-to-implode">Keenspot begins to implode</h2>

<p>At this point in time, Keenspot has slowly gone from The Place For Quality Webcomics to an aging portal whose design has not significantly changed in 8 years, and whose creators are still on it largely out of inertia. AutoKeen has not been updated in any significant way – it hasn&#39;t been upgraded to do anything as basic and obvious as generating RSS feeds, for example. By 2009 <a href="https://en.wikipedia.org/wiki/Google_Reader">Google Reader</a> would have been in its heyday, and RSS feeds a perfect fit for daily comics. (<a href="https://web.archive.org/web/20101120210546/http://cgwiki.comicgenesis.com/index.php?title=AutoGenesis">AutoKeen would eventually be rewritten out of frustration</a> by an administrator of Keenspace around 2010, though the rewrite didn&#39;t necessarily add much in the way of new features – it just made it easier to diagnose when it failed.) The <a href="https://web.archive.org/web/20060101102858/https://www.mindfaucet.com/comicpress/">ComicPress WordPress theme</a> has been out for years, making self-hosting comics with archives more accessible than ever. Comics are starting to take off on Tumblr and Twitter. As far as I can tell, Keenspot artists are still carefully naming their files, uploading them to FTP sites, and hand-tweaking bespoke HTML templates. Keenspot as a technology, in terms of the service it provides for creators, is completely stagnant and neglected.</p>

<p>July 2009 – Jodie Troutman, a fairly prominent cartoonist who had “graduated” from Keenspace to Keenspot, <a href="http://comixtalk.com/john_troutman_keenspot_update/">is fired from Keenspot for no particular reason that anyone wants to disclose</a>.  (CW: deadnaming in linked article) Nothing like this has happened before in Keenspot&#39;s history. Troutman&#39;s response:</p>

<blockquote><p>Though I believe my membership was terminated unjustly and through no fault of my own, I suspect I’ll be much better off without Keenspot, whose management I never really saw eye-to-eye with.  All my friends have had great success as indie webcomics, so I can only hope to follow in their footsteps.</p></blockquote>

<p>I see this sort of sentiment a lot – that Keenspot isn&#39;t really helping creators, and that publishing independently is a much better value proposition.</p>

<p>December 2009 – Kel McDonald is fired from Keenspot and <a href="https://dannysdomain.livejournal.com/240684.html">publicly airs some dirty laundry</a> about how unprofessional and basically useless Keenspot leadership has been towards its artists not named Crosby, and about how nobody is getting paid on time.</p>

<p>Teri Crosby responds in the comments with a <a href="https://dannysdomain.livejournal.com/240684.html?thread=2200620#t2200620">polite point-by-point rebuttal</a>, signing off with “Smiles”.</p>

<p>Bobby Crosby responds in the comments <a href="https://dannysdomain.livejournal.com/240684.html?thread=2211628#t2211628">accusing everyone of being liars, ranting about how tiny and insignificant Keenspot is, how anything it does for its creators is a gift and above and beyond what they deserve and that they should be grateful, and posits that Keenspot should just be shut down</a>.</p>

<blockquote><p>“To have a booth presence mismanaged year after year, as Keenspot&#39;s frequently is, is unacceptable this far down the road.”</p>

<p>Why??? Who cares??? What does it matter to you? Keenspot is a tiny company that shouldn&#39;t even have a booth at SDCC in the first place but does so anyway mostly just as a little bonus to its members, a little gift. Are you the same type of person who turns down a free gift because it&#39;s not nice enough for someone of your stature??? Who gives a fuck???</p>

<p>“We all know you guys have Keenspot on autopilot and are basically using it to fund Chris and Bobby&#39;s side projects.”</p>

<p>It&#39;s the OPPOSITE. Our projects keep Keenspot alive so even more time and money can be wasted on it because its owners for some reason love a bunch of people who mostly hate them because of your lies.</p>

<p>No one in my family lives in a school.</p></blockquote>

<p>There&#39;s lots going on in the comments but I would like to quote this section of <a href="https://dannysdomain.livejournal.com/240684.html?thread=2217516#t2217516">Scott Kurtz&#39;s reply to Bobby</a>:</p>

<blockquote><p>Blatant and Keenspot are these very disparate entities when it suits you and sister companies when it suits you. You own both companies. You took the Keenspot booth this year and gave half to Blatant.</p></blockquote>

<h2 id="the-keen-dream-is-dead">The Keen dream is dead</h2>

<p>Within days after Kel McDonald&#39;s firing, Keenspot posts new mandatory contracts that all of its artists will need to sign in six months, or else leave Keenspot. The terms of this contract are... unfavourable towards creators, compared to what came before. These contracts are not sent to creators directly but instead quietly dumped in a private forum post, leaving many in the dark. <a href="http://fleen.com/2009/12/22/some-days-i-feel-like-a-real-goddamn-journalist/">Webcomics blog Fleen published a detailed exposé</a>.</p>

<p>As one creator puts it:</p>

<blockquote><p>Every Keenspot member I’ve spoken to agrees that this is the Crosbys’ way of firing everyone without having to fire anyone, since trying to ditch Kel [McDonald] blew up in their faces.</p>

<p>The new contract is ridiculous, completely unreasonable, and they know that. It doesn’t just mandate a revenue split, but requires cartoonists to give up their domains, and the contracts are slated to last three to five years.</p></blockquote>

<p>Chris Crosby doesn&#39;t really disagree:</p>

<blockquote><p>As well as not inviting or accepting any new members, we may also politely decline existing members who decide to sign the new contract. We’ll be having long discussions with each interested creator (assuming there are any) in order to work out what’s mutually beneficial and what’s not. If Keenspot cannot bring something substantial to the table for the creator in question, we will stop working with them.</p>

<p>I had hoped Keenspot the webcomics collective and Keenspot the independent publishing concern could co-exist happily. But after two years [following a 2008 reorganization and the buyout of former partners] the resounding answer is no. Those two sides of Keenspot resent each other, and neither side is happy.</p>

<p>[G]oing forward our focus will be directed solely at properties we have a long-term investment in, which is primarily Crosby-produced comics and related projects. That’s what makes the most business sense for us as a company, and we make no apologies for it.</p></blockquote>

<h2 id="aftermath">Aftermath</h2>

<p><a href="https://comicvine.gamespot.com/chris-crosby/4040-66086/issues-cover/">Have a look at Chris Crosby&#39;s print comic credits</a>, paying attention to the stuff with the “Keenspot” banner on top. “Fartnite.” “Yang Gang.” “Barry Steakfries: From the Files of Jetpack Joyride.” Chris Crosby started his career selling trash and it appears that&#39;s how he&#39;s determined to end it.</p>

<p>There is a <a href="https://www.keenspot.com/">big list of comics on the Keenspot frontpage</a>; only the titles in bold are “currently updating”. 7 comics are marked in bold. One of them is Marry Me, which completed its run in 2008, but for obvious reasons has ads in every news box and site banner. Another is Head Trip, which hasn&#39;t updated since July 2017. A third is No Pink Ponies, which last updated March 2018.</p>

<p>In 2022, Keenspot is 4 comics.</p>

<h2 id="marry-me">Marry Me</h2>

<p>In February 2007, Bobby Crosby and artist Remi “Eisu” Mokhtar launch “Marry Me”, at the URL marrymemovie.com. The URL and the <a href="https://web.archive.org/web/20070218102654/http://www.marrymemovie.com/main/2007/02/14/page-1-ex-boyfriends/">commentary for the first page</a> make it clear: this is a graphic novel that Bobby will be adapting into a screenplay and trying to sell to Hollywood. The webcomic exists as an elaborate movie pitch.</p>

<p>You might be aware that “Marry Me” is now, in 2022, a big-budget Hollywood romcom, starring Jennifer Lopez and Owen Wilson. It took 15 years and the total implosion of Keenspot, a breathtaking, agonizing squandering of an enormous amount of actual talent in service of one unbearable asshole, but the motherfucker actually did it.</p>

<p>And yet. None of the marketing around this movie mentions the comic it was based on. Nobody going to see it knows Bobby Crosby&#39;s name, or Blatant Comics, or Keenspot. The Crosbys spent <em>16 years</em> pushing Bobby&#39;s work, and at the end of all that is this disposable rom-com; every creative decision made by everyone involved for the sole reason that they thought it would sell.</p>

<p>I wonder: is Chris Crosby able to look back and be proud of that?</p>

<p><a href="https://blog.information-superhighway.net/tag:webcomics" class="hashtag"><span>#</span><span class="p-category">webcomics</span></a> <a href="https://blog.information-superhighway.net/tag:essays" class="hashtag"><span>#</span><span class="p-category">essays</span></a></p>
]]></content:encoded>
      <guid>https://blog.information-superhighway.net/pow-zap-comics-on-the-internet-on-the-big-screen-biff</guid>
      <pubDate>Mon, 21 Feb 2022 19:32:48 +0000</pubDate>
    </item>
    <item>
      <title>Spellcaster</title>
      <link>https://blog.information-superhighway.net/spellcaster</link>
      <description>&lt;![CDATA[or, discovering and recovering a lost programming language over the course of a weekend&#xA;&#xA;I go down rabbit holes. One of the great pleasures of this dumb future we live in is that you can dig through the milk crates of our culture forever, following whatever interests you, and there is no bottom. My latest rabbit hole looked something like this:&#xA;&#xA;I have been spending a few minutes each day seeking out backgrounds for use on Google Meet calls at work that match my t-shirts.&#xA;I was wearing a t-shirt featuring some early computer comic art from a Beagle Bros catalogue, drawn by Robert Cavey&#xA;Started poking around at old Beagle Bros catalogues, settled on this background&#xA;Started poking around Apple II magazines for more Bob Cavey comics&#xA;Discovered the following advertisement:&#xA;An advertisement for Spellcaster&#xA;&#xA;Wait, hold on a minute. This kind of sounds exactly like the weird idealized programming environment I keep in my head. Obviously with serious restrictions and caveats, but, as a learning tool? They implemented this on an Apple II in 1984? How far exactly did they go?!--more--&#xA;&#xA;I mean, look at this:&#xA;&#xA;&#34;Everything a Spellcaster program does leaves marks on the screen. You watch all its inner workings in motion.&#34; Programming is frustratingly opaque - it can be incredibly difficult, especially for a beginner, to answer the question, &#34;why is the computer doing this?&#34; But for the purposes of a teaching language, why not put all of the program&#39;s state on the screen?&#xA;&#34;Debugging a Spellcaster program is easy, because you can stop it, make it back up to the mistake (while you watch), change it, and let it run forward again.&#34; I have been fighting to build this literal exact workflow for YEARS. 
Time travel debugging crossed with edit and continue??&#xA;&#34;Imagine an editor and interpreter so wed that every keystroke, as it is typed, is syntactically checked and executed, so you instantly see its effects. If you backspace, the program reconstructs its previous state -- even in the middle of conditions and loops.&#34; A special livecoding editor built for the tightest possible feedback loop, with no invalid states?&#xA;&#xA;So of course I went looking for disk images.&#xA;&#xA;There were no disk images. There were a couple of scanned magazines with the same ad. That was all I could turn up.&#xA;&#xA;I started asking around on the internet and a couple of people tracked down a few more references; a couple more short reviews in some magazines. The jackpot, though, was this reimplementation of Spellcaster written in Processing. The readme gave me names, and the names gave me email addresses.&#xA;&#xA;Long story short, I reached out to Scott Gilkeson, the original programmer of the C64 version, and it turned out that he had conveniently made disk images a few years ago. He also put me in touch with John Fairfield, the original designer and programmer of the Apple II version, who gave his blessing to share them. (The Apple II version is lost, as far as anyone knows.) There was also a PC version, written in C; nobody seems to know where that is, either. Within a day of reaching out, I had a copy.&#xA;&#xA;You can use Spellcaster now, right from your browser.&#xA;&#xA;div style=&#34;position: relative; padding-bottom: 75%;&#34;&#xA;  iframe src=&#34;https://archive.org/embed/c64spellcaster&#34; width=&#34;100%&#34; height=&#34;100%&#34; allow=&#34;autoplay&#34; allowfullscreen style=&#34;width: 100%; height: 100%; position: absolute; left: 0px; top: 0px;&#34;/iframe&#xA;/div&#xA;&#xA;A manual is also available. 
(Scott is seeing about creating a higher-quality scan, but what&#39;s there is much better than nothing.)&#xA;&#xA;John Fairfield would go on to cofound the company that produced the Rosetta Stone language learning software. Spellcaster is, in a deep way, also language learning software. There are some truly fascinating design decisions made, not only in the Spellcaster programming environment, as advertised, but also as a language.&#xA;&#xA;It is the only programming language I am aware of that is designed to be spoken. The way the editor works, you generally press keys corresponding to syllables, rather than letters. There are occasionally some mnemonic properties to those syllables, but they&#39;re pretty arbitrary; you end up writing words like NUBOBORIBOBOLIAKA. Presumably, in a group learning setting like a classroom, this means programs can be unambiguously talked about, down to the last instruction - an often overlooked but useful property, as anyone who has tried to read code off a slide can attest.&#xA;There is a base 4 numbering system that&#39;s used in a few different places; 4 of the 5 vowels are used as digits, and paired with a starting consonant to mean different things that are done with that number (MABO means &#34;repeat BO once&#34;, KA means &#34;set the pen to colour #1&#34;). The fifth vowel, U, is used to choose a random number between 0 and 3. Because there are four cardinal directions, and 16 colours, this means something like MURI (turn clockwise a random number of times) is a nice concise way to write &#34;point my pen in a random direction&#34;, and KUKU is a nice concise way to write &#34;choose a random colour&#34;. Again, designed to be spoken, but also designed to be concise.&#xA;You can have spaces in identifiers, but not the letter Z. Z is short for the ZIM syllable, which ends free text entry. Honestly? I kind of really love that tradeoff. 
A beginner is much more likely to run into the question of &#34;why can&#39;t I write two words here&#34; long before they ask &#34;why can&#39;t I write a Z&#34;.&#xA;You can also give a spell an empty name. I did this by accident but the manual says &#34;Well, that&#39;s the name that has no letters in it, and it&#39;s a perfectly good name.&#34; And indeed, it just works, to call it you just write TUZIM, a little like if there was a special function you could call in Lisp by writing ().&#xA;Reading just a little further into the manual, it turns out the spell with the empty name has an actual, specific purpose! VUZIM means &#34;see if the user pressed a key, and if they did, run a spell corresponding to that key&#34;. So if the user hits the &#34;A&#34; key, it will run the A spell. So of course since you can&#39;t have a Z spell, if the user hits the &#34;Z&#34; key, the empty spell is called. Given that you do have to hit the &#34;Z&#34; key to create the spell, there is a certain strange elegance to it; I don&#39;t entirely know if I think it&#39;s a good design but it&#39;s sure not a choice I would have thought to make.&#xA;&#xA;#preservation #retrocomputing]]&gt;</description>
      <content:encoded><![CDATA[<h2 id="or-discovering-and-recovering-a-lost-programming-language-over-the-course-of-a-weekend">or, discovering and recovering a lost programming language over the course of a weekend</h2>

<p>I go down rabbit holes. One of the great pleasures of this dumb future we live in is that you can dig through the milk crates of our culture forever, following whatever interests you, and there is no bottom. My latest rabbit hole looked something like this:</p>
<ul><li>I have been spending a few minutes each day seeking out backgrounds for use on Google Meet calls at work that match my t-shirts.</li>
<li>I was wearing a t-shirt featuring <a href="https://www.8bittees.com/product/beagle-bros-peeks-poked-cat-t-shirt/">some early computer comic art from a Beagle Bros catalogue</a>, drawn by <a href="https://spindleyq.tumblr.com/post/157947163844/geez-the-kid-might-just-mean-it-this-timehes">Robert Cavey</a></li>
<li>Started poking around at old Beagle Bros catalogues, settled on <a href="https://cf.mastohost.com/v1/AUTH_91eb37814936490c95da7b85993cc2ff/gamemaking/media_attachments/files/105/322/379/207/021/777/original/839b6303a3ba01ed.png">this background</a></li>
<li>Started poking around Apple II magazines for <a href="https://cf.mastohost.com/v1/AUTH_91eb37814936490c95da7b85993cc2ff/gamemaking/media_attachments/files/105/322/791/927/028/545/original/6f410ce3c44947a0.png">more Bob Cavey comics</a></li>
<li>Discovered the following advertisement:
<img src="https://cf.mastohost.com/v1/AUTH_91eb37814936490c95da7b85993cc2ff/gamemaking/media_attachments/files/105/322/828/267/147/424/original/1f14d1f9a00e6078.png" alt="An advertisement for Spellcaster"></li></ul>

<p>Wait, hold on a minute. This kind of sounds <em>exactly</em> like the weird idealized programming environment I keep in my head. Obviously with serious restrictions and caveats, but, as a learning tool? They implemented this on an Apple II in 1984? How far exactly did they go?</p>

<p>I mean, look at this:</p>
<ul><li>“Everything a Spellcaster program does leaves marks on the screen. You watch all its inner workings in motion.” Programming is frustratingly opaque – it can be incredibly difficult, especially for a beginner, to answer the question, “why is the computer doing this?” But for the purposes of a teaching language, why not put all of the program&#39;s state on the screen?</li>
<li>“Debugging a Spellcaster program is easy, because you can stop it, make it back up to the mistake (while you watch), change it, and let it run forward again.” I have been fighting to build this literal exact workflow for YEARS. Time travel debugging crossed with edit and continue??</li>
<li>“Imagine an editor and interpreter so wed that every keystroke, as it is typed, is syntactically checked and executed, so you instantly see its effects. If you backspace, the program reconstructs its previous state — even in the middle of conditions and loops.” A special livecoding editor built for the tightest possible feedback loop, with no invalid states?</li></ul>
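<p>The “backspace reconstructs previous state” trick is easy to prototype on modern hardware: snapshot the interpreter state before every keystroke, and backspace becomes “pop a snapshot”. A minimal Python sketch of the idea (how Spellcaster actually pulled this off on an Apple II is surely cleverer; the turtle-ish commands here are made up for illustration):</p>

```python
# Sketch: an interpreter where every keystroke executes immediately, and
# backspace restores the exact prior state by popping a saved snapshot.
import copy

class TurtleState:
    def __init__(self):
        self.x, self.y, self.heading = 0, 0, 0  # heading: 0=N, 1=E, 2=S, 3=W

class LiveEditor:
    def __init__(self):
        self.state = TurtleState()
        self.program = []        # keystrokes accepted so far
        self.snapshots = []      # state as it was *before* each keystroke

    def key(self, k):
        """Execute one keystroke immediately; 'F' steps forward, 'R' turns right."""
        self.snapshots.append(copy.deepcopy(self.state))
        self.program.append(k)
        if k == "R":
            self.state.heading = (self.state.heading + 1) % 4
        elif k == "F":
            dx, dy = [(0, 1), (1, 0), (0, -1), (-1, 0)][self.state.heading]
            self.state.x += dx
            self.state.y += dy

    def backspace(self):
        """Un-execute the last keystroke by restoring its snapshot."""
        if self.program:
            self.program.pop()
            self.state = self.snapshots.pop()

ed = LiveEditor()
for k in "FFRF":
    ed.key(k)
print(ed.state.x, ed.state.y)  # 1 2 (north twice, turn east, east once)
ed.backspace()                 # the last F never happened
print(ed.state.x, ed.state.y)  # 0 2
```

<p>Snapshot-per-keystroke is memory-hungry, which is exactly why doing it on 64K of RAM is so impressive.</p>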

<p>So of course I went looking for disk images.</p>

<p>There were no disk images. There were a couple of scanned magazines with the same ad. That was all I could turn up.</p>

<p>I started asking around on the internet and a couple of people tracked down a few more references: a couple more short reviews in some magazines. The jackpot, though, was <a href="https://github.com/scottg521/spellcaster">this reimplementation of Spellcaster</a> written in Processing. The readme gave me names, and the names gave me email addresses.</p>

<p>Long story short, I reached out to Scott Gilkeson, the original programmer of the C64 version, and it turned out that he had conveniently made disk images a few years ago. He also put me in touch with John Fairfield, the original designer and programmer of the Apple II version, who gave his blessing to share them. (The Apple II version is lost, as far as anyone knows.) There was also a PC version, written in C; nobody seems to know where that is, either. Within a day of reaching out, I had a copy.</p>

<p>You can <a href="https://archive.org/details/c64_spellcaster">use Spellcaster now, right from your browser</a>.</p>

<div style="position: relative; padding-bottom: 75%;">
  <iframe src="https://archive.org/embed/c64_spellcaster" allowfullscreen="" style="width: 100%; height: 100%; position: absolute; left: 0px; top: 0px;"></iframe>
</div>

<p>A <a href="https://archive.org/details/spellcaster_manual">manual</a> is also available. (Scott is seeing about creating a higher-quality scan, but what&#39;s there is much better than nothing.)</p>

<p>John Fairfield would go on to cofound the company that produced the Rosetta Stone language learning software. Spellcaster is, in a deep way, also language learning software. There are some truly fascinating design decisions here, not only in the Spellcaster programming environment, as advertised, but also in Spellcaster as a language.</p>
<ul><li>It is the only programming language I am aware of that is designed to be <em>spoken</em>. The way the editor works, you generally press keys corresponding to <em>syllables</em>, rather than letters. There are occasionally some mnemonic properties to those syllables, but they&#39;re pretty arbitrary; you end up writing words like <code>NUBOBORIBOBOLIAKA</code>. Presumably, in a group learning setting like a classroom, this means programs can be unambiguously talked about, down to the last instruction – an often overlooked but useful property, as anyone who has tried to read code off a slide can attest.</li>
<li>There is a base 4 numbering system that&#39;s used in a few different places; 4 of the 5 vowels are used as digits, and paired with a starting consonant to mean different things that are done with that number (<code>MABO</code> means “repeat <code>BO</code> once”, <code>KA</code> means “set the pen to colour #1”). The fifth vowel, U, is used to choose a random number between 0 and 3. Because there are four cardinal directions, and 16 colours, this means something like <code>MURI</code> (turn clockwise a random number of times) is a nice concise way to write “point my pen in a random direction”, and <code>KUKU</code> is a nice concise way to write “choose a random colour”. Again, designed to be spoken, but also designed to be <em>concise</em>.</li>
<li>You can have spaces in identifiers, but not the letter Z. Z is short for the <code>ZIM</code> syllable, which ends free text entry. Honestly? I kind of really love that tradeoff. A beginner is much more likely to run into the question of “why can&#39;t I write two words here” long before they ask “why can&#39;t I write a Z”.</li>
<li>You can also give a spell an <em>empty name</em>. I did this by accident, but the manual says “Well, that&#39;s the name that has no letters in it, and it&#39;s a perfectly good name.” And indeed, it just works: to call it, you just write <code>TUZIM</code>, a little like if there were a special function you could call in Lisp by writing <code>()</code>.</li>
<li>Reading just a little further into the manual, it turns out the spell with the empty name has an actual, specific purpose! <code>VUZIM</code> means “see if the user pressed a key, and if they did, run a spell corresponding to that key”. So if the user hits the “A” key, it will run the <code>A</code> spell. So of course since you can&#39;t have a <code>Z</code> spell, if the user hits the “Z” key, the empty spell is called. Given that you do have to hit the “Z” key to create the spell, there is a certain strange elegance to it; I don&#39;t entirely know if I think it&#39;s a <em>good</em> design but it&#39;s sure not a choice I would have thought to make.</li>
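<p>To make the base-4 scheme concrete, here's a tiny Python sketch of a syllable decoder. Only “A means 1” and “U means random” are implied above; the digit values I've assigned to O, E, and I are pure guesswork, not the manual's actual mapping:</p>

```python
# Sketch of Spellcaster-style base-4 syllable decoding. The O/E/I digit
# values below are my invention; only A=1 and U="random" come from the post.
import random

VOWEL_DIGIT = {"O": 0, "A": 1, "E": 2, "I": 3}  # hypothetical ordering

def vowel_value(vowel, rng=random):
    """Map a vowel to a base-4 digit; U draws a random digit 0-3."""
    if vowel == "U":
        return rng.randrange(4)
    return VOWEL_DIGIT[vowel]

def decode_colour(syllables, rng=random):
    """Two K-syllables pick one of 16 colours: two base-4 digits."""
    hi, lo = (vowel_value(s[1], rng) for s in syllables)
    return hi * 4 + lo

print(decode_colour(["KI", "KA"]))  # 3*4 + 1 = 13
print(decode_colour(["KU", "KU"]))  # some random colour, 0-15
```

<p>The nice property survives even in this toy: every number is exactly one spoken vowel, and randomness costs nothing extra.</p>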

<p><a href="https://blog.information-superhighway.net/tag:preservation" class="hashtag"><span>#</span><span class="p-category">preservation</span></a> <a href="https://blog.information-superhighway.net/tag:retrocomputing" class="hashtag"><span>#</span><span class="p-category">retrocomputing</span></a></p>
]]></content:encoded>
      <guid>https://blog.information-superhighway.net/spellcaster</guid>
      <pubDate>Sat, 24 Jul 2021 02:27:29 +0000</pubDate>
    </item>
    <item>
      <title>Honeylisp: Livecoding the Apple II with 6502 assembly</title>
      <link>https://blog.information-superhighway.net/honeylisp-livecoding-the-apple-ii-with-6502-assembly</link>
      <description>&lt;![CDATA[Honeylisp is a programming environment for the Apple II which attempts to leverage the absurd computational horsepower of a modern laptop to make programming a 1mhz 8-bit processor more magical.&#xA;&#xA;Step 1: Write a 6502 assembler&#xA;&#xA;The Honeylisp assembler is written in Fennel, which is a Lisp dialect that compiles to Lua. My goal in writing it was that the input to the assembler would be simple lists, making Fennel into a super-powerful macro assembler basically for free. 6502 assembly is treated as simple data, a kind of embedded DSL that can trivially be generated by simple Fennel functions.&#xA;&#xA;Step 2: Integrate into a text editor&#xA;&#xA;The Honeylisp environment runs on top of lite, a very small and extensible programmer&#39;s text editor written in Lua. It&#39;s trivial to add new commands that trigger whatever kind of interactive build process I want. It&#39;s also straightforward to extend to create custom UI; I put together the bones of a simple tile editor in an evening. I ended up porting lite to the love2d runtime to allow me more flexibility in how I could build these custom editors.&#xA;&#xA;It is also a fairly straightforward matter to add hot code reloading, though it requires a bit of forethought to take full advantage of it. When I put together the tile editor, I could make changes to it, hit Alt-R, and those changes were immediately live. It&#39;s cool to be able to use your editor to live-edit your editor.&#xA;&#xA;Step 3: Integrate into an emulator&#xA;&#xA;Back in July, I came across this video of Dagen Brock talking about an Apple IIgs emulator project he was working on that had an interesting property. Instead of integrating a debugger into the emulator, he decided he would implement a socket-based debugging interface that anyone could write a front-end for. 
Any external program could use this to gain full control of the emulated computer - read and write memory, set registers, set breakpoints, anything.&#xA;&#xA;It turns out this idea kind of languished, and he never quite got to polishing up and releasing his debugger. But this emulator, GSPlus, exists, and it gave me the core idea at the heart of Honeylisp - if I can arbitrarily read and write memory, and I have the ability to augment my assembler, then I can do anything. I can take snapshots and jump back and forth between them, recovering from hard crashes. I can push code updates while the game is running; I just need to be able to specify an appropriate &#34;merge point&#34;. I can map variables from one version of the program to another, so even if data moves around in memory, I can compensate. I can put breakpoints on memory I know should be read-only, to catch wild pointer accesses. I can implement a REPL entirely on the PC side; assemble tiny snippets of code and trigger them. I can integrate with my tools, so making changes to a graphical tile in my editor instantly updates the screen of my emulated Apple II. The possibilities are truly vast.&#xA;&#xA;Step 4: Integrate with hardware&#xA;&#xA;One of the pretty unique features of the Apple II is that it comes with a built-in machine language monitor, complete with a mini assembler and disassembler. It turns out that this can trivially be controlled over a serial port to do things like... write arbitrary memory. So I can plug my laptop directly into a stock Apple II, type IN #1, and then run a single command to bootstrap my environment. In fact writing arbitrary memory in response to serial port commands is not very complicated, and many of the magical abilities I imagine for my environment can be done directly on real hardware.&#xA;&#xA;Step 5: Write a game&#xA;&#xA;There is no point in investing all of this effort into software development tools if you don&#39;t plan to develop any software. 
I intend to port my MS-DOS game Neut Tower to the Apple II using this system. Neut Tower was written using a hand-rolled Forth system; I&#39;ve built a simple Forth-like stack VM to allow for non-performance-critical code to be written compactly. Because my assembler is so easily extensible, I can also easily use simple Fennel code to generate VM bytecode, which means I can do without a huge amount of the overhead of a full Forth as all of the compiler-y bits live on my laptop. This VM can also be the basis of my interactive REPL, which will be much nicer to write than little assembly snippets.&#xA;&#xA;Step 6: Tell people about it&#xA;&#xA;It&#39;s fairly early stages for all of these steps - I have a good start in all of them, but there is still lots of work to be done. I&#39;m really excited about this project, though, and I want to talk about it! Hopefully this blog will be a useful place to do that. I&#39;m looking forward to continuing to share my progress.&#xA;&#xA;#lisp #honeylisp #retrocomputing #apple2 #neuttower]]&gt;</description>
<content:encoded><![CDATA[<p>Honeylisp is a programming environment for the Apple II which attempts to leverage the absurd computational horsepower of a modern laptop to make programming a 1 MHz 8-bit processor more magical.</p>

<h2 id="step-1-write-a-6502-assembler">Step 1: Write a 6502 assembler</h2>

<p>The Honeylisp assembler is written in <a href="https://fennel-lang.org/">Fennel</a>, which is a Lisp dialect that compiles to Lua. My goal in writing it was that the input to the assembler would be simple lists, making Fennel into a super-powerful macro assembler basically for free. 6502 assembly is treated as simple data, a kind of embedded DSL that can trivially be generated by simple Fennel functions.</p>
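<p>The real assembler is Fennel, but the “assembly as plain lists” idea translates to any language with literal data structures. Here's a toy Python sketch covering a few real 6502 opcodes; the payoff is the last bit, where an ordinary function acts as a macro because instructions are just data:</p>

```python
# Toy sketch of "6502 assembly as plain data". Each instruction is a tuple;
# any function that returns tuples is effectively a macro.
OPCODES = {
    ("lda", "imm"): 0xA9,  # LDA #nn  (load accumulator, immediate)
    ("sta", "abs"): 0x8D,  # STA nnnn (store accumulator, absolute)
    ("rts", None):  0x60,  # RTS      (return from subroutine)
}

def assemble(program):
    out = bytearray()
    for op in program:
        mnemonic, mode = op[0], (op[1] if len(op) > 1 else None)
        out.append(OPCODES[(mnemonic, mode)])
        if mode == "imm":
            out.append(op[2] & 0xFF)
        elif mode == "abs":
            out += op[2].to_bytes(2, "little")  # the 6502 is little-endian
    return bytes(out)

def store(value, addr):
    """A 'macro': a plain function that returns instructions as data."""
    return [("lda", "imm", value), ("sta", "abs", addr)]

prog = store(0x20, 0x0400) + [("rts",)]
print(assemble(prog).hex())  # a9208d000460
```

<p>With Fennel you get quasiquote and real macros on top of this for free, which is the whole point.</p>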

<h2 id="step-2-integrate-into-a-text-editor">Step 2: Integrate into a text editor</h2>

<p>The Honeylisp environment runs on top of <a href="https://github.com/rxi/lite">lite</a>, a very small and extensible programmer&#39;s text editor written in Lua. It&#39;s trivial to add new commands that trigger whatever kind of interactive build process I want. It&#39;s also straightforward to extend to create custom UI; I put together the bones of a simple tile editor in an evening. I ended up porting lite to the <a href="https://love2d.org/">love2d</a> runtime to allow me more flexibility in how I could build these custom editors.</p>

<p>It is also a fairly straightforward matter to add hot code reloading, though it requires a bit of forethought to take full advantage of it. When I put together the tile editor, I could make changes to it, hit <code>Alt-R</code>, and those changes were immediately live. It&#39;s cool to be able to use your editor to live-edit your editor.</p>

<h2 id="step-3-integrate-into-an-emulator">Step 3: Integrate into an emulator</h2>

<p>Back in July, I came across this video of <a href="https://www.youtube.com/watch?v=1LzCmpAanpE">Dagen Brock talking about an Apple IIgs emulator project</a> he was working on that had an interesting property. Instead of integrating a debugger into the emulator, he decided he would implement a socket-based debugging interface that anyone could write a front-end for. Any external program could use this to gain full control of the emulated computer – read and write memory, set registers, set breakpoints, anything.</p>

<p>It turns out this idea kind of languished, and he never quite got to polishing up and releasing his debugger. But this emulator, GSPlus, exists, and it gave me the core idea at the heart of Honeylisp – if I can arbitrarily read and write memory, and I have the ability to augment my assembler, then I can do <em>anything</em>. I can take snapshots and jump back and forth between them, recovering from hard crashes. I can push code updates while the game is running; I just need to be able to specify an appropriate “merge point”. I can map variables from one version of the program to another, so even if data moves around in memory, I can compensate. I can put breakpoints on memory I know should be read-only, to catch wild pointer accesses. I can implement a REPL entirely on the PC side; assemble tiny snippets of code and trigger them. I can integrate with my tools, so making changes to a graphical tile in my editor instantly updates the screen of my emulated Apple II. The possibilities are truly vast.</p>
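<p>It takes remarkably little protocol before “read and write memory over a socket” unlocks everything else. The wire format below is entirely made up (GSPlus's real interface differs); it's just a self-contained Python illustration of the shape of the idea, with a toy “emulator” serving 64K of RAM:</p>

```python
# Toy socket memory-debug interface. The line-based protocol is invented:
#   "R addr len"       -> reply with len bytes of memory, hex-encoded
#   "W addr hexbytes"  -> write the bytes, reply "OK"
import socket
import threading

def emulator(conn, ram):
    """Serve read/write requests against a bytearray of 'emulated' memory."""
    with conn, conn.makefile("rw") as f:
        for line in f:
            parts = line.split()
            if parts[0] == "R":
                addr, n = int(parts[1], 16), int(parts[2])
                f.write(ram[addr:addr + n].hex() + "\n")
            elif parts[0] == "W":
                addr, data = int(parts[1], 16), bytes.fromhex(parts[2])
                ram[addr:addr + len(data)] = data
                f.write("OK\n")
            f.flush()

ram = bytearray(0x10000)                 # 64K of emulated memory
server, client = socket.socketpair()
threading.Thread(target=emulator, args=(server, ram), daemon=True).start()

io = client.makefile("rw")
io.write("W 0300 a920\n"); io.flush()    # poke LDA #$20 at $0300
print(io.readline().strip())             # OK
io.write("R 0300 2\n"); io.flush()
print(io.readline().strip())             # a920
```

<p>Snapshots, merge points, watchpoints, a PC-side REPL: all of them are front-end features layered on these two verbs.</p>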

<h2 id="step-4-integrate-with-hardware">Step 4: Integrate with hardware</h2>

<p>One of the pretty unique features of the Apple II is that it comes with a built-in machine language monitor, complete with a mini assembler and disassembler. It turns out that this can trivially be controlled over a serial port to do things like... write arbitrary memory. So I can plug my laptop directly into a stock Apple II, type <code>IN #1</code>, and then run a single command to bootstrap my environment. In fact writing arbitrary memory in response to serial port commands is not very complicated, and many of the magical abilities I imagine for my environment can be done directly on real hardware.</p>
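<p>Since the monitor's input syntax is just addresses and hex bytes, “bootstrap over serial” is mostly string formatting. A hedged Python sketch that builds monitor store commands (a real bootstrap also has to worry about echo, pacing, and line-length limits, none of which this handles):</p>

```python
# Format a blob of bytes as Apple II monitor "store" commands, the kind of
# text you might send down the serial line after IN #1. Only builds strings;
# actually transmitting them (and pacing the monitor) is left out.
def monitor_store(addr, data, per_line=8):
    """Yield monitor commands like '300:A9 20 8D 00 04 60' per chunk."""
    for i in range(0, len(data), per_line):
        chunk = data[i:i + per_line]
        bytes_hex = " ".join(f"{b:02X}" for b in chunk)
        yield f"{addr + i:X}:{bytes_hex}"

code = bytes.fromhex("a9208d000460")     # LDA #$20 / STA $0400 / RTS
for line in monitor_store(0x300, code):
    print(line)                          # 300:A9 20 8D 00 04 60
print(f"{0x300:X}G")                     # 300G -- tell the monitor to run it
```
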

<h2 id="step-5-write-a-game">Step 5: Write a game</h2>

<p>There is no point in investing all of this effort into software development tools if you don&#39;t plan to develop any software. I intend to port my MS-DOS game <a href="https://spindleyq.itch.io/neut-tower">Neut Tower</a> to the Apple II using this system. Neut Tower was written using a hand-rolled Forth system; I&#39;ve built a simple Forth-like stack VM to allow for non-performance-critical code to be written compactly. Because my assembler is so easily extensible, I can also easily use simple Fennel code to generate VM bytecode, which means I can do without a huge amount of the overhead of a full Forth as all of the compiler-y bits live on my laptop. This VM can also be the basis of my interactive REPL, which will be much nicer to write than little assembly snippets.</p>
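<p>The real VM is 6502 code with bytecode emitted by Fennel, but the skeleton of a Forth-like stack VM is tiny in any language. A Python sketch with made-up opcode names, just to show the shape:</p>

```python
# The shape of a minimal Forth-like stack VM: a data stack, literals, and
# calls into a table of primitives. Opcode names here are invented.
def run(bytecode, ops):
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]; pc += 1
        if op == "lit":                    # push the following literal
            stack.append(bytecode[pc]); pc += 1
        elif op == "call":                 # invoke a named primitive
            ops[bytecode[pc]](stack); pc += 1
    return stack

PRIMITIVES = {
    "+":   lambda s: s.append(s.pop() + s.pop()),
    "*":   lambda s: s.append(s.pop() * s.pop()),
    "dup": lambda s: s.append(s[-1]),
}

# "3 dup *" -- square the top of the stack
print(run(["lit", 3, "call", "dup", "call", "*"], PRIMITIVES))  # [9]
```

<p>The interesting part is that the compiler half lives on the laptop, so the 8-bit side only ever sees the bytecode and the primitive table.</p>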

<h2 id="step-6-tell-people-about-it">Step 6: Tell people about it</h2>

<p>It&#39;s fairly early stages for all of these steps – I have a good start in all of them, but there is still lots of work to be done. I&#39;m really excited about this project, though, and I want to talk about it! Hopefully this blog will be a useful place to do that. I&#39;m looking forward to continuing to share my progress.</p>

<p><a href="https://blog.information-superhighway.net/tag:lisp" class="hashtag"><span>#</span><span class="p-category">lisp</span></a> <a href="https://blog.information-superhighway.net/tag:honeylisp" class="hashtag"><span>#</span><span class="p-category">honeylisp</span></a> <a href="https://blog.information-superhighway.net/tag:retrocomputing" class="hashtag"><span>#</span><span class="p-category">retrocomputing</span></a> <a href="https://blog.information-superhighway.net/tag:apple2" class="hashtag"><span>#</span><span class="p-category">apple2</span></a> <a href="https://blog.information-superhighway.net/tag:neuttower" class="hashtag"><span>#</span><span class="p-category">neuttower</span></a></p>
]]></content:encoded>
      <guid>https://blog.information-superhighway.net/honeylisp-livecoding-the-apple-ii-with-6502-assembly</guid>
      <pubDate>Sun, 11 Oct 2020 05:30:41 +0000</pubDate>
    </item>
    <item>
      <title>Retrocomputing</title>
      <link>https://blog.information-superhighway.net/retrocomputing</link>
      <description>&lt;![CDATA[So I should probably have a blog post that I can point to about this whole retrocomputing project that I&#39;ve been up to the past year and a half.&#xA;&#xA;I wrote a game on an MS-DOS 286 PC, using only tools I built myself or tools that were available during the era where they were still selling 286 PCs. It&#39;s called Neut Tower and you can play it on your MS-DOS PC, in DOSBox, or in your browser. As part of this project, I implemented a Forth system, and built most of my game and its tools using it.&#xA;&#xA;My motivation at the start of the project was this: I was enjoying using my 286. I liked the single-tasking workflow; there were no distractions. I was downloading games and apps and it was fun! So I figured I&#39;d take the next step and write a little game or something.&#xA;&#xA;When I was a teenager, I had a 286, and I tried to learn low-level programming on it because my options were &#34;low-level programming&#34; and &#34;BASIC&#34;, and I had hit my limit with BASIC. Assembly might as well have been Martian to me, but I got a book about C, and I got a book about game programming, and I sort of got some stuff working. But mostly the stuff I tried to do myself from scratch, or port from other sources, didn&#39;t work, and I didn&#39;t know why. Eventually I also got access to a 486, and then a Pentium, and the internet, and djgpp and Allegro, and suddenly I had an embarrassment of nice graphics and sound libraries and tooling, segment:offset addressing didn&#39;t matter, and I never had to worry about trying to understand how Mode X worked ever again.&#xA;&#xA;Twentyish years later, I wanted to learn all the stuff that never quite clicked for me. I wanted to dig into how everything worked, to make sense of the tutorials that once baffled me. I wanted to really understand it all. 
So I started writing little prototypes, and pretty soon, yeah, I had a cool EGA graphics engine, with two way scrolling of a tilemap and 16x16 sprites drawn on top, running at a decent speed on actual hardware. Everything fell into place one tiny experiment at a time.&#xA;&#xA;With the hardware programming side of things, I learned that my teenage understanding hadn&#39;t really been all that far off the mark - my problems weren&#39;t so much that I didn&#39;t understand the tutorials and resources that were available to me, it was more that I was simply bad at debugging my beginner code, and didn&#39;t have the tools or the patience to fix it. With 20 years of professional programming experience under my belt, and a wealth of resources on the internet that explained how things worked in depth, this was no longer an issue.&#xA;&#xA;Then I started to write a game loop in C, and didn&#39;t really like it. I knew in the back of my head that, for what I wanted to do, I really wanted some kind of scripting language. And I remembered Forth existed. &#xA;&#xA;In my 20s, obsessed with both the world of programming languages and the world of embedded systems, it was inevitable that I would learn about Forth - it&#39;s a particularly uncommon blend of small and powerful, that could run directly on hardware, that people who loved it really loved. I&#39;d tried seriously to learn it but couldn&#39;t really wrap my head around it - the weird postfix syntax, the confusing levels of meta. Why could I not use IF statements at the REPL? How was I supposed to remember all these finicky rules? I filed it away as &#34;interesting, but not for me.&#34;&#xA;&#xA;This project was the perfect opportunity to revisit that evaluation. Forth fit the bill exactly - it was a tool that could be built quickly, using minimal resources, and made to do what I wanted, AND I already had a hazy half-remembered foundation from decades ago. 
I dove headfirst into it.&#xA;&#xA;Relearning Forth was an altogether different experience. It turned out that once I built one myself, I understood it completely. The design of Forth is to write as little code as you possibly can, to make the computer do only as much work as it needs to. When I had to write it all myself, I had to decide - is it worth building this language feature, or can I do without it? Usually I could do without it. Usually there was a tinier way to do it. The code that I had to write wasn&#39;t really all that much uglier or worse for it, once I got used to the constraints. And I had proven designs I could pilfer; there are lots of existing open-source Forth implementations to get inspiration from. There are guides for building Forth systems. Doing Forth is not learning an existing language set in stone, it is building a language to solve your problem, and sharing ideas about useful building blocks. Chuck Moore, the inventor of Forth, hated its standardization; thought the goal of portability was absurd, thought everyone should change it as they needed, to fit their problem. He is still trying out new ideas, rebuilding, simplifying, making a system uniquely his own.&#xA;&#xA;So why do I think all this is important enough to write about?&#xA;&#xA;When I was a kid, I had this persistent idea in my head, that computing was a skill I could work at, get better at, and that doing so would allow me to accomplish things that were impossible for me without it. &#34;Once I got good enough&#34;, I could make a computer game, by myself. I could draw the graphics, I could write the code, I could make the music, I could design it all. I could make it and I could put it out into the world and it would be mine, start to finish. Every time I learned something new about computers, got some new piece of software, I gained abilities. I could do things I couldn&#39;t do before. 
My vision of computer literacy is that everyone has this experience, that everyone can learn the skills they want, is provided with the tools they need, to make their imagination real.  I have never really let go of this idea.&#xA;&#xA;I&#39;m still trying to find ways to make it true, still trying to explore the different ways that computing can be empowering. Retrocomputing is one avenue for that - people in the past had a lot of good ideas that didn&#39;t catch on. And while emulators are wonderful, running them inside a modern computing system makes it harder to experience what using an old computing system really felt like.&#xA;&#xA;When I show people my setup, they are often curious about the qualitative difference between old tools and modern tools; it must be so much harder, right? And... for me, it&#39;s really not! I write bugs at about the same rate; I fix them at about the same rate. There are many things I can&#39;t do because of resource constraints, but that keeps the scope manageable and makes for an interesting challenge to find cool stuff I can do. The biggest thing I miss is having a second editor that I can use to look at &amp; edit code while my game is running -- I have often resorted to taking a photo of some code with my phone so I can read it while I have the game up.&#xA;&#xA;And I gain really valuable things from the constraints. The biggest thing is that there&#39;s no alt-tab away from the work - it&#39;s so much easier to focus without a web browser instantly at my fingertips. (I&#39;m procrastinating at work writing this right now!) The resource constraints mean I have to focus ruthlessly on solving the problems I have, not the problems I imagine I&#39;ll have - there&#39;s no perfect, elegant, general solution if I think hard enough, there&#39;s only adding things and cleaning up what I&#39;ve got, one small piece at a time. And I can take workflow seriously as one of those problems! 
When I&#39;m fed up with the tools that are available for DOS on a 286 (and this happened multiple times!), I make my own that work the way I want, and I&#39;m able to integrate them seamlessly into my engine. I&#39;m able to intentionally craft my environment to be comfortable. I&#39;m no artist, but multiple people have complimented my art - partly, the secret is that 16x16 sprites and tiles can only look so good with a fixed ugly 16-colour palette, so I&#39;m able to focus on broad colour and style choices. But really, if you put me into my ugly, limited pixel editor that&#39;s two pages of code but instantly shows me what my sprite looks like in my game, I will mess around until I&#39;m happy. Put me in front of Photoshop with 16 million colours and I will go crazy from decision fatigue; I&#39;ll avoid making more art, and I&#39;ll get myself stuck.&#xA;&#xA;So for me, the tradeoffs are incredibly worth it. I&#39;ve spent decades trying to make games as a hobby; I&#39;ve put out reams of junk - failed prototypes, bad joke games, quick jam games, failed engines, half-finished tools. I&#39;ve tried every way of making games that I can think of; coding engines from scratch, using Unity, Godot, Love2D, Klik &amp; Play, Game Maker, Twine, Construct, Adventure Game Studio, pygame, Allegro. Some approaches I&#39;ve had more success with than others, but I&#39;ve not ever been as happy with anything I&#39;ve made as I am with Neut Tower. Not as a retrocomputing exercise -- as a game.&#xA;&#xA;Neut Tower is done, for now, and I am taking a break from it. (Perhaps someday I will return to it to create the next two episodes.) I&#39;m quickly finding myself using all of these lessons and starting to build some tools for myself in Linux. I don&#39;t quite know what they&#39;ll turn into yet, but I&#39;m looking forward to finding out, one small piece at a time.&#xA;&#xA;#neuttower #retrocomputing #essays #forth]]&gt;</description>
      <content:encoded><![CDATA[<p>So I should probably have a blog post that I can point to about this whole retrocomputing project that I&#39;ve been up to the past year and a half.</p>

<p>I wrote a game on an MS-DOS 286 PC, using only tools I built myself or tools that were available during the era when they were still selling 286 PCs. It&#39;s called <a href="https://spindleyq.itch.io/neut-tower">Neut Tower</a> and you can play it on your MS-DOS PC, in DOSBox, or in your browser. As part of this project, I implemented a Forth system, and built most of my game and its tools using it.</p>

<p>My motivation at the start of the project was this: I was enjoying using my 286. I liked the single-tasking workflow; there were no distractions. I was downloading games and apps and it was fun! So I figured I&#39;d take the next step and write a little game or something.</p>

<p>When I was a teenager, I had a 286, and I tried to learn low-level programming on it because my options were “low-level programming” and “BASIC”, and I had hit my limit with BASIC. Assembly might as well have been Martian to me, but I got a book about C, and I got a book about game programming, and I sort of got some stuff working. But mostly the stuff I tried to do myself from scratch, or port from other sources, didn&#39;t work, and I didn&#39;t know why. Eventually I also got access to a 486, and then a Pentium, and the internet, and <a href="http://www.delorie.com/djgpp/">djgpp</a> and <a href="https://liballeg.org/readme.html">Allegro</a>, and suddenly I had an embarrassment of nice graphics and sound libraries and tooling, segment:offset addressing didn&#39;t matter, and I never had to worry about trying to understand how Mode X worked ever again.</p>

<p>Twentyish years later, I wanted to learn all the stuff that never quite clicked for me. I wanted to dig into how everything worked, to make sense of the tutorials that once baffled me. I wanted to really understand it all. So I started writing little prototypes, and pretty soon, yeah, I had a cool EGA graphics engine, with two-way scrolling of a tilemap and 16x16 sprites drawn on top, running at a decent speed on actual hardware. Everything fell into place one tiny experiment at a time.</p>

<p>With the hardware programming side of things, I learned that my teenage understanding hadn&#39;t really been all that far off the mark – my problems weren&#39;t so much that I didn&#39;t understand the tutorials and resources that were available to me, it was more that I was simply bad at debugging my beginner code, and didn&#39;t have the tools or the patience to fix it. With 20 years of professional programming experience under my belt, and a wealth of resources on the internet that explained how things worked in depth, this was no longer an issue.</p>

<p>Then I started to write a game loop in C, and didn&#39;t really like it. I knew in the back of my head that, for what I wanted to do, I really wanted some kind of scripting language. And I remembered Forth existed.</p>

<p>In my 20s, obsessed with both the world of programming languages and the world of embedded systems, it was inevitable that I would learn about Forth – it&#39;s a particularly uncommon blend of small and powerful, that could run directly on hardware, that people who loved it <em>really</em> loved. I&#39;d tried seriously to learn it but couldn&#39;t really wrap my head around it – the weird postfix syntax, the confusing levels of meta. Why could I not use IF statements at the REPL? How was I supposed to remember all these finicky rules? I filed it away as “interesting, but not for me.”</p>

<p>This project was the perfect opportunity to revisit that evaluation. Forth fit the bill exactly – it was a tool that could be built quickly, using minimal resources, and made to do what I wanted, AND I already had a hazy half-remembered foundation from decades ago. I dove headfirst into it.</p>

<p>Relearning Forth was an altogether different experience. It turned out that once I built one myself, I understood it completely. The design of Forth pushes you to write as little code as you possibly can, to make the computer do only as much work as it needs to. When I had to write it all myself, I had to decide – is it worth building this language feature, or can I do without it? Usually I could do without it. Usually there was a tinier way to do it. The code that I had to write wasn&#39;t really all that much uglier or worse for it, once I got used to the constraints. And I had proven designs I could pilfer; there are lots of existing open-source Forth implementations to get inspiration from. There are guides for building Forth systems. Doing Forth is not learning an existing language set in stone; it is building a language to solve your problem, and sharing ideas about useful building blocks. Chuck Moore, the inventor of Forth, hated its standardization: he thought the goal of portability was absurd, and that everyone should change the language as they needed, to fit their problem. He is still trying out new ideas, rebuilding, simplifying, making a system uniquely his own.</p>

<p>So why do I think all this is important enough to write about?</p>

<p>When I was a kid, I had this persistent idea in my head: that computing was a skill I could work at, get better at, and that doing so would allow me to accomplish things that were impossible for me without it. “Once I got good enough”, I could make a computer game, by myself. I could draw the graphics, I could write the code, I could make the music, I could design it all. I could make it and I could put it out into the world and it would be mine, start to finish. Every time I learned something new about computers, got some new piece of software, I gained abilities. I could do things I couldn&#39;t do before. My vision of computer literacy is that everyone has this experience, that everyone can learn the skills they want and be provided with the tools they need to make their imagination real. I have never really let go of this idea.</p>

<p>I&#39;m still trying to find ways to make it true, still trying to explore the different ways that computing can be empowering. Retrocomputing is one avenue for that – people in the past had a lot of good ideas that didn&#39;t catch on. And while emulators are wonderful, running them inside a modern computing system makes it harder to experience what using an old computing system really felt like.</p>

<p>When I show people my setup, they are often curious about the qualitative difference between old tools and modern tools; it must be so much harder, right? And... for me, it&#39;s really not! I write bugs at about the same rate; I fix them at about the same rate. There are many things I can&#39;t do because of resource constraints, but that keeps the scope manageable and makes for an interesting challenge to find cool stuff I <em>can</em> do. The biggest thing I miss is having a second editor that I can use to look at &amp; edit code while my game is running — I have often resorted to taking a photo of some code with my phone so I can read it while I have the game up.</p>

<p>And I gain really valuable things from the constraints. The biggest thing is that there&#39;s no alt-tab away from the work – it&#39;s so much easier to focus without a web browser instantly at my fingertips. (I&#39;m procrastinating at work writing this right now!) The resource constraints mean I have to focus ruthlessly on solving the problems I have, not the problems I imagine I&#39;ll have – there&#39;s no perfect, elegant, general solution if I think hard enough, there&#39;s only adding things and cleaning up what I&#39;ve got, one small piece at a time. And I can take workflow seriously as one of those problems! When I&#39;m fed up with the tools that are available for DOS on a 286 (and this happened multiple times!), I make my own that work the way I want, and I&#39;m able to integrate them seamlessly into my engine. I&#39;m able to intentionally craft my environment to be comfortable. I&#39;m no artist, but multiple people have complimented my art – partly, the secret is that 16x16 sprites and tiles can only look so good with a fixed ugly 16-colour palette, so I&#39;m able to focus on broad colour and style choices. But really, if you put me into my ugly, limited pixel editor that&#39;s two pages of code but instantly shows me what my sprite looks like in my game, I will mess around until I&#39;m happy. Put me in front of Photoshop with 16 million colours and I will go crazy from decision fatigue; I&#39;ll avoid making more art, and I&#39;ll get myself stuck.</p>

<p>So for me, the tradeoffs are incredibly worth it. I&#39;ve spent decades trying to make games as a hobby; I&#39;ve put out reams of junk – failed prototypes, bad joke games, quick jam games, failed engines, half-finished tools. I&#39;ve tried every way of making games that I can think of: coding engines from scratch; using Unity, Godot, Love2D, Klik &amp; Play, Game Maker, Twine, Construct, Adventure Game Studio, pygame, Allegro. Some approaches I&#39;ve had more success with than others, but I&#39;ve never been as happy with anything I&#39;ve made as I am with Neut Tower. Not as a retrocomputing exercise – as a game.</p>

<p>Neut Tower is done, for now, and I am taking a break from it. (Perhaps someday I will return to it to create the next two episodes.) I&#39;m quickly finding myself using all of these lessons and starting to build some tools for myself in Linux. I don&#39;t quite know what they&#39;ll turn into yet, but I&#39;m looking forward to finding out, one small piece at a time.</p>

<p><a href="https://blog.information-superhighway.net/tag:neuttower" class="hashtag"><span>#</span><span class="p-category">neuttower</span></a> <a href="https://blog.information-superhighway.net/tag:retrocomputing" class="hashtag"><span>#</span><span class="p-category">retrocomputing</span></a> <a href="https://blog.information-superhighway.net/tag:essays" class="hashtag"><span>#</span><span class="p-category">essays</span></a> <a href="https://blog.information-superhighway.net/tag:forth" class="hashtag"><span>#</span><span class="p-category">forth</span></a></p>
]]></content:encoded>
      <guid>https://blog.information-superhighway.net/retrocomputing</guid>
      <pubDate>Wed, 13 May 2020 21:01:42 +0000</pubDate>
    </item>
    <item>
      <title>Data is code</title>
      <link>https://blog.information-superhighway.net/data-is-code</link>
      <description>&lt;![CDATA[I&#39;ve been seriously writing Forth, with my homebrew Forth dialect, for about a year now, off and on, and I&#39;ve noticed something interesting with how things end up structured.&#xA;&#xA;Forth and Lisp are often spoken of as though they are similar in some deep way. In Lisp circles, you often hear &#34;code is data.&#34; This is generally held to mean &#34;Lisp has macros&#34;, more or less - a Lisp program&#39;s source code is a syntax tree made of Lisp lists, that your Lisp program can introspect into and transform into new syntax trees and execute. Your program is literally a data structure.&#xA;&#xA;My Forth code has very few things I would refer to as &#34;data structures&#34;. There is no significant language for defining them - I write one-off words that do pointer arithmetic. I only have a handful, so I haven&#39;t felt the need to generalize. It does zero transformation of them - they have been carefully chosen to be directly useful for everything the program needs them for, in-place, as-is.&#xA;&#xA;Instead, the common pattern is that everything is code, which, thanks to Forth&#39;s flexible non-syntax, can be made to look a lot like data. Often data is compiled directly into the code that uses it - instead of naming a constant that&#39;s passed into a function to do a particular thing, you name a function that takes no arguments that just does the thing. (There are lots of flexible ways to make this sort of thing easy and inexpensive in Forth.) Forth is hyper-imperative to a degree that, as modern programmers, we&#39;ve largely forgotten is even possible. Even, say, the number 4 is arguably a word executed for its side effects (push the value 4 onto the current stack). Of course, this is how CPUs work, too - you don&#39;t have a concept of &#34;4&#34; on its own in assembly, you have the concept of moving &#34;4&#34; into a register, or into memory. The only thing you can tell a CPU is to do things. 
Forth is the same.&#xA;&#xA;One consequence is that a Forth word that represents a constant is invoked in exactly the same way as a word that makes decisions. What this means is that it is virtually impossible to write yourself into a corner by &#34;hard-coding&#34; something. You can start with the most direct implementation, and expand it into something more flexible as you need to. I often find myself turning a word that was very static into something dynamic, and not having to change any of the code that depends on it. And my Forth has developed lots of facilities for sophisticated decision-making and dispatch. It turns out that most sophisticated decision-making is largely just indirection, and is easy to accomplish even in extremely resource-constrained environments. Many things I used to think of as modern, expensive conveniences - anonymous functions! polymorphism! green threads! - are actually extremely cheap and simple to build, they just... don&#39;t exist in C.&#xA;&#xA;In &#34;Programming a Problem-Oriented Language&#34;, Chuck Moore defines &#34;input&#34; as &#34;...information that controls a program.&#34; Forth and Lisp share the idea that, most of the time, it&#39;s more powerful and flexible to use the language&#39;s parser to read a program&#39;s input. Before JSON, there was the s-expression, the universal data structure, and in Lisp, you usually are either using macros to turn that data into code directly, or writing an interpreter for that data. You can often think of a Lisp program as a collection of small, domain-specific virtual machines.&#xA;&#xA;However, Forth doesn&#39;t really have a parser; it has a tokenizer, a symbol table, an interpreter, and a virtual machine. Parsing Forth and executing Forth are synonymous; hell, compiling Forth and executing Forth are synonymous. Forth says you don&#39;t need a domain-specific virtual machine; you already have a perfectly good machine right here! 
Why not just solve your problem directly, right now? &#xA;&#xA;You may need sophisticated abstractions to succinctly describe the logic of how your problem is solved, and writing good Forth code is all about investing in those. But Forth makes an argument that most of the data that your program deals with is actually about controlling what your program should do, and making decisions about what your program should do is the job of code.&#xA;&#xA;There are drawbacks to this approach, of course; plenty of things that are inconvenient to express as text, plenty of times I wished I had a &#34;live&#34; data structure I could update on the fly and persist while my program is running, rather than having to exit my program and update my code. But if you can work within the constraints, there is enormous flexibility in it. I&#39;m writing a puzzle game, and while I have a terse vocabulary for defining levels, it&#39;s also trivial for me to add little custom setpieces to a given level, to throw in dialogue in response to weird events, to add weird constraints that only apply in that space, because at every step, I have the full power of the language at my disposal. If I&#39;d taken a data-driven approach, I would have needed to plan everything in advance, to design my little problem-oriented VM and hope I thought of everything I needed. But with a code-first approach, I can be much more exploratory - try to build things, and if they work well, factor them out to be used more generally. Architecture arises naturally from need, as I build.&#xA;&#xA;#forth #essays]]&gt;</description>
      <content:encoded><![CDATA[<p>I&#39;ve been seriously writing Forth, with my homebrew Forth dialect, for about a year now, off and on, and I&#39;ve noticed something interesting with how things end up structured.</p>

<p>Forth and Lisp are often spoken of as though they are similar in some deep way. In Lisp circles, you often hear “code is data.” This is generally held to mean “Lisp has macros”, more or less – a Lisp program&#39;s source code is a syntax tree made of Lisp lists, that your Lisp program can introspect into and transform into new syntax trees and execute. Your program is literally a data structure.</p>

<p>My Forth code has very few things I would refer to as “data structures”. There is no significant language for defining them – I write one-off words that do pointer arithmetic. I only have a handful, so I haven&#39;t felt the need to generalize. It does zero transformation of them – they have been carefully chosen to be directly useful for everything the program needs them for, in-place, as-is.</p>

<p>Instead, the common pattern is that everything is code, which, thanks to Forth&#39;s flexible non-syntax, can be made to <em>look</em> a lot like data. Often data is compiled directly into the code that uses it – instead of naming a constant that&#39;s passed into a function to do a particular thing, you name a function that takes no arguments that just <em>does</em> the thing. (There are lots of flexible ways to make this sort of thing easy and inexpensive in Forth.) Forth is hyper-imperative to a degree that, as modern programmers, we&#39;ve largely forgotten is even possible. Even, say, the number 4 is arguably a word executed for its side effects (push the value 4 onto the current stack). Of course, this is how CPUs work, too – you don&#39;t have a concept of “4” on its own in assembly, you have the concept of moving “4” into a register, or into memory. The only thing you can tell a CPU is to do things. Forth is the same.</p>
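<p>(A hedged aside, and not the author&#39;s code: the “the number 4 is a word executed for its side effects” idea can be sketched in a few lines of Python, where a literal is just an action that pushes onto a shared stack.)</p>

```python
# Illustrative sketch only: model Forth "words" as zero-argument actions
# that share one stack. Even a literal is an action: "push this number".
stack = []

def lit(n):
    # Return a word that, when executed, pushes n onto the stack.
    return lambda: stack.append(n)

def add():
    # The word "+" pops two values and pushes their sum.
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

# "1 2 +" is a sequence of actions, each run purely for its side effect.
for word in (lit(1), lit(2), add):
    word()

print(stack)  # [3]
```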

<p>One consequence is that a Forth word that represents a constant is invoked in exactly the same way as a word that makes decisions. What this means is that it is virtually impossible to write yourself into a corner by “hard-coding” something. You can start with the most direct implementation, and expand it into something more flexible as you need to. I often find myself turning a word that was very static into something dynamic, and not having to change any of the code that depends on it. And my Forth has developed lots of facilities for sophisticated decision-making and dispatch. It turns out that most sophisticated decision-making is largely just indirection, and is easy to accomplish even in extremely resource-constrained environments. Many things I used to think of as modern, expensive conveniences – anonymous functions! polymorphism! green threads! – are actually extremely cheap and simple to build, they just... don&#39;t exist in C.</p>
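<p>(To make the “you can&#39;t hard-code yourself into a corner” point concrete, here is a hypothetical Python sketch – not the author&#39;s Forth – where a word that starts life as a constant later becomes a computation, and no call site changes.)</p>

```python
# Hypothetical sketch: a "word" is looked up and executed the same way
# whether it is a hard-coded constant or a computed value.
stack = []
dictionary = {
    "max-sprites": lambda: stack.append(16),  # begins as a plain constant
}

def run(word):
    dictionary[word]()  # call sites never know how the word gets its value

run("max-sprites")
assert stack.pop() == 16

# Later, redefine it to be dynamic (here, based on a made-up memory figure)
# without touching any code that calls it.
free_mem = 64
dictionary["max-sprites"] = lambda: stack.append(min(free_mem // 2, 64))

run("max-sprites")
assert stack.pop() == 32
```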

<p>In “Programming a Problem-Oriented Language”, Chuck Moore defines “input” as “...<a href="https://colorforth.github.io/POL.htm">information that controls a program</a>.” Forth and Lisp share the idea that, most of the time, it&#39;s more powerful and flexible to use the language&#39;s parser to read a program&#39;s input. Before JSON, there was the s-expression, the universal data structure, and in Lisp, you usually are either using macros to turn that data into code directly, or writing an interpreter for that data. You can often think of a Lisp program as a collection of small, domain-specific virtual machines.</p>

<p>However, Forth doesn&#39;t really have a parser; it has a tokenizer, a symbol table, an interpreter, and a virtual machine. Parsing Forth and executing Forth are synonymous; hell, <em>compiling</em> Forth and executing Forth are synonymous. Forth says you don&#39;t need a domain-specific virtual machine; you already have a perfectly good machine right here! Why not just solve your problem directly, right now?</p>
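<p>(The “parsing is executing” loop really is tiny. This is a toy Python approximation, not the author&#39;s implementation: split on whitespace, look each token up, else try it as a number, else complain.)</p>

```python
# Toy sketch of a Forth-style outer interpreter: the "parser" is just a
# whitespace split, and every token is either executed or pushed.
stack = []
dictionary = {
    "+": lambda: stack.append(stack.pop() + stack.pop()),
    ".": lambda: print(stack.pop()),
}

def interpret(line):
    for token in line.split():
        if token in dictionary:
            dictionary[token]()           # a known word: execute it
        else:
            try:
                stack.append(int(token))  # a number: push it
            except ValueError:
                print(token + "?")        # unknown word: the classic terse error

interpret("3 4 +")
print(stack)  # [7]
```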

<p>You may need sophisticated <em>abstractions</em> to succinctly describe the logic of how your problem is solved, and writing good Forth code is all about investing in those. But Forth makes an argument that most of the data that your program deals with is actually about controlling what your program should do, and making decisions about what your program should do is the job of code.</p>

<p>There are drawbacks to this approach, of course; plenty of things that are inconvenient to express as text, plenty of times I wished I had a “live” data structure I could update on the fly and persist while my program is running, rather than having to exit my program and update my code. But if you can work within the constraints, there is enormous flexibility in it. I&#39;m writing a puzzle game, and while I have a terse vocabulary for defining levels, it&#39;s also trivial for me to add little custom setpieces to a given level, to throw in dialogue in response to weird events, to add weird constraints that only apply in that space, because at every step, I have the full power of the language at my disposal. If I&#39;d taken a data-driven approach, I would have needed to plan everything in advance, to design my little problem-oriented VM and hope I thought of everything I needed. But with a code-first approach, I can be much more exploratory – try to build things, and if they work well, factor them out to be used more generally. Architecture arises naturally from need, as I build.</p>

<p><a href="https://blog.information-superhighway.net/tag:forth" class="hashtag"><span>#</span><span class="p-category">forth</span></a> <a href="https://blog.information-superhighway.net/tag:essays" class="hashtag"><span>#</span><span class="p-category">essays</span></a></p>
]]></content:encoded>
      <guid>https://blog.information-superhighway.net/data-is-code</guid>
      <pubDate>Fri, 01 May 2020 02:24:32 +0000</pubDate>
    </item>
    <item>
      <title>What the hell is Forth?</title>
      <link>https://blog.information-superhighway.net/what-the-hell-is-forth</link>
      <description>&lt;![CDATA[Forth is perhaps the tiniest possible useful interactive programming language.  It is tiny along a number of dimensions:&#xA;&#xA;The amount of code required to implement it&#xA;The size of the code that is generated&#xA;The amount of memory used&#xA;The number of features it considers necessary for useful work&#xA;&#xA;It is a language that makes complexity painful, but which reveals that a surprising amount can be accomplished without introducing any. Forth is the opposite of &#34;bloat&#34;. If you&#39;ve ever been like &#34;Oh my God this Electron-based chat app is taking up 10% of my CPU at idle, what the HELL is it DOING, modern computing has gone MAD&#34;, Forth is there to tell you that computing went mad decades ago, and that programs could be doing SO MUCH MORE with SO MUCH LESS.&#xA;&#xA;WHAT DO YOU MEAN, &#34;FORTH&#34;&#xA;&#xA;There is an expression about Forth: &#34;If you&#39;ve seen one Forth, you&#39;ve seen one Forth.&#34; Forth isn&#39;t a strictly-defined language, though there is a standardized dialect; it&#39;s more a set of ideas that tend to work well together.&#xA;&#xA;In the past month, I wrote a tiny Forth system on a 286 running MS-DOS using Turbo C++ 1.01. It is my first time using Forth in anger, though I read a lot about it 15 years ago. When I refer to my Forth, I am referring to a system literally thrown together in two weeks, written by someone who does not really know Forth that well. It is slow and wildly nonstandard and it doesn&#39;t do very much, but I have enjoyed the process of writing it very much. 
If you are a grizzled old Forth grognard, please let me know if I have misrepresented anything.&#xA;&#xA;WHAT DOES FORTH NOT DO&#xA;&#xA;Here is an incomplete list of things you may take for granted as a programmer that Forth, in its purest form, generally considers unnecessary waste:&#xA;&#xA;Garbage collection&#xA;Dynamic memory allocation&#xA;Garbage&#xA;Memory safety&#xA;Static types&#xA;Dynamic types&#xA;Objects&#xA;Polymorphic methods&#xA;Closures&#xA;Lexical scoping&#xA;The concept of global variables being in any way &#34;bad&#34;&#xA;Local variables&#xA;The ability to write &#34;IF&#34; statements at the REPL&#xA;&#xA;Most or all of these can be added to the language - the Forth standard, ANS Forth, specifies words for dynamic memory allocation and local variables. There are lots of object systems that people have built on top of Forth. Forth is a flexible medium, if you&#39;re willing to put in the work. 
Once the interpreter has found a word, it looks it up in the global dictionary, and if it has an entry, it executes it. If it doesn&#39;t have an entry, the interpreter tries to parse it as a number; if that works, it pushes that number on the stack. If it&#39;s not a number either, it prints out an error and pushes on.&#xA;&#xA;Oops, I meant to describe the syntax but instead I wrote down the entire interpreter semantics, because it fits in three sentences.&#xA;&#xA;The exception to the &#34;whatever is not whitespace is a word&#34; rule is that the interpreter is not the only piece of Forth code that can consume input. For example, ( is a word that reads input and discards it until it finds a ) character. That&#39;s how comments work - the interpreter sees the ( with a space after it, runs the word, and then the next character it looks at is after the comment has ended. You can trivially define ( in one line of Forth.&#xA;&#xA;WHY THE HELL WOULD I USE THAT&#xA;&#xA;There are practical reasons:&#xA;&#xA;You need something tiny and reasonably powerful, and you don&#39;t care about memory safety&#xA;I&#39;m not sure I can think of any others&#xA;&#xA;And there are intangible reasons:&#xA;&#xA;Implementing a programming language that fits into a few kilobytes of RAM, that you understand every line of, that you can build one piece at a time and extend infinitely, makes you feel like a god-damn all-powerful wizard&#xA;&#xA;Part of the mystique of Forth is that you can get very metacircular with it - control flow words like IF and FOR are implemented in Forth, not part of the compiler/interpreter. So are comments, and string literals. The compiler/interpreter itself is usually, in some way, written in Forth. 
It turns out that you can discard virtually every creature comfort of modern programming and still end up with a useful language that is extensible in whatever direction you choose to put effort into.&#xA;&#xA;Forth enters that rarefied pantheon of languages where the interpreter is, like, half a page of code, written in itself. In many ways it&#39;s kind of like a weird backwards lisp with no parentheses. And it can be made to run on the tiniest hardware!&#xA;&#xA;The mental model for bootstrapping a Forth system goes something like:&#xA;&#xA;Write primitive words in assembly - this includes the complete Forth &#34;VM&#34;, as distinct from the Forth language interpreter/compiler. The set of built-in words can be very, very small - in the document &#34;eForth Overview&#34; by C. H. Ting, which I have seen recommended as an excellent deep-dive into the details of how to build a Forth environment, Ting states that his system is built with 31 &#34;primitive&#34; words written in assembly. &#xA;Hand-assemble &#34;VM bytecode&#34; for the interpreter/compiler and required dependencies - because of the extreme simplicity of the VM, you can generally program your macro assembler to do this job, and so this can meaningfully resemble the act of simply writing Forth code directly&#xA;Write all new words using the interpreter/compiler you just got running&#xA;&#xA;I say &#34;interpreter/compiler&#34; and not &#34;interpreter and compiler&#34; because they are literally mixed together; there is a global flag that determines whether the interpreter is in &#34;compile mode&#34; or not. 
It is done this way because it turns out that if you add the ability to mark a word as &#34;always interpret, even in compile mode&#34;, you have added the ability to extend the compiler in arbitrary ways.&#xA;&#xA;WHAT SUCKS ABOUT WRITING FORTH&#xA;&#xA;Any word that takes more than two or three parameters is a nightmare to read or write&#xA;Right now in my codebase I have a word that uses two global variables because I cannot deal with juggling all of the values on the stack. This word is absolutely not re-entrant and at some point I&#39;m going to need to rewrite it so that it is, and I am not looking forward to it. If I had local variables, it would be substantially less of a problem. But there&#39;s also part of me that thinks there must be some way to rewrite it to be simpler that I haven&#39;t figured out yet.&#xA;&#xA;There&#39;s another word in my codebase that takes 4 or 5 parameters that I managed to write by breaking it up into, like, 8 smaller words, over the course of writing / rewriting for like an hour or two. I felt pretty proud when I finally got it working, but honestly I think it would have been pretty trivial to write in C with local variables. I miss them.&#xA;&#xA;Shit crashes&#xA;Remember the part about no memory safety? Yeah, there&#39;s all kinds of ways a wayward Forth system can go wrong. I forgot a DROP once in a frequently-used word and my computer hard-locked when the stack overflowed. 
(To be fair: my computer was a 286 running MS-DOS, so I was already in a situation where programming it meant rebooting it when I inevitably fucked something up.)&#xA;&#xA;Nonexistent error messages&#xA;The only error message my Forth system has is, if it doesn&#39;t recognize the word &#34;foo&#34;, it prints &#34;foo?&#34;  If, for example, I write an IF statement, but forget to end it with THEN, I don&#39;t get a compile error, I get -- you guessed it -- a runtime hard crash.&#xA;&#xA;WHAT RULES ABOUT WRITING FORTH&#xA;&#xA;It&#39;s compact as hell&#xA;The majority of words I write are literally one line of code. They do a small job and get out.&#xA;&#xA;It&#39;s direct as hell&#xA;Building abstractions in Forth is... different than building abstractions in other languages.  It&#39;s still a really core, important thing, but as building complex / expensive code is so much work, stacking expensive abstractions on top of each other is not really tenable. So you&#39;re left with very basic building blocks to do your job as straightforwardly as possible.&#xA;&#xA;You are absolutely empowered to fix any problems with your particular workflow and environment&#xA;People turn Forth systems into tiny OSes, complete with text editors, and I absolutely did not understand this impulse until I wrote my own. The Forth interpreter is an interactive commandline, and you can absolutely make it your own. Early on I wrote a decompiler, because it was easy. It&#39;s like half a screen of code. There are some cases it falls down on, but I wrote it in like a half hour and it works well enough for what I need.&#xA;&#xA;Everything is tiny and easy to change or extend&#xA;Remember when I said I wrote a decompiler because it was easy? 
Other things I changed in an evening or two:&#xA;&#xA;Added co-operative multitasking (green threads)&#xA;Custom I/O overrides, so my interactive REPL sessions could be saved to disk&#xA;Rewrote the core interpreter loop in Forth&#xA;Rewrote the VM loop to not use the C stack&#xA;Instrumenting the VM with debug output to catch a crash bug&#xA;&#xA;One of the things on my todo list is a basic interactive step-through debugger, which I suspect I&#39;ll be able to get basically up and running within, like, an hour or two? When things stay tiny and simple, you don&#39;t worry too much about changing them to make them better, you just do it.&#xA;&#xA;If you have ever wanted an assembly code REPL, this is about as close as you&#39;re going to get&#xA;Forth is a dynamic language in which the only type is &#34;a 16-bit number&#34; and you can do whatever the fuck you want with that number. This is dangerous as hell, of course, but if you are writing code that has no chance of having to handle arbitrary adversarial input from the internet (like my aforementioned MS-DOS 286), it is surprising how refreshing and fun this is.&#xA;&#xA;THIS SOUNDS INTERESTING, WHAT IS THE BEST WAY TO LEARN MORE&#xA;&#xA;I honestly do not know if there is a better way to understand Forth than just trying to build your own, and referring to other Forth implementations and documents when you get stuck. It&#39;s been my experience that they just don&#39;t make sense until you&#39;re neck deep into it.  
And it&#39;s tiny enough that you feel good about throwing away pieces that aren&#39;t working once you understand what does work.&#xA;&#xA;I&#39;ve found the process of writing my own Forth and working within its constraints to be far more rewarding than any time I have tried working with existing Forths, even if on occasion I have wished for more complex functionality than I&#39;m willing to build on my own.&#xA;&#xA;WHAT HAVE I LEARNED FROM ALL THIS&#xA;&#xA;I&#39;m very interested in alternate visions of what computing can look like, and who it can be for. Forth has some very interesting ideas embedded in it:&#xA;&#xA;A system does not have to be complex to be flexible, extensible, and customizable&#xA;A single person should be able to understand a computing system in its entirety, so that they can change it to fit their needs&#xA;&#xA;I find myself wondering a lot what a more accessible Forth might look like; are there more flexible, composable, simple abstractions like the Forth &#34;word&#34; out there? Our current GUI paradigms can&#39;t be irreducible in complexity; is there a radically simpler alternative that empowers individuals? What else could an individual-scale programming language look like, that is not only designed to enable simplicity, but to outright disallow complexity? &#xA;&#xA;Forth is a radical language because it does not &#34;scale up&#34;; you cannot build a huge system in it that no one person understands and expect it to work. Most systems I have used that don&#39;t scale up - Klik &amp; Play, Hypercard, Scratch, that sort of thing - are designed for accessibility. Forth is not; it&#39;s designed for leverage. That&#39;s an interesting design space I wasn&#39;t even really aware of.&#xA;&#xA;The lesson that implementing abstractions as directly as possible enables you to more easily change them is a useful one. 
And the experience of succeeding in building a programming environment from scratch on an underpowered computer in a couple of weeks is something I will bring with me to other stalled projects - you can sit down for a couple of hours, radically simplify, make progress, and learn.&#xA;&#xA;#forth #retrocomputing #essays]]&gt;</description>
      <content:encoded><![CDATA[<p>Forth is perhaps the tiniest possible useful interactive programming language.  It is tiny along a number of dimensions:</p>
<ul><li>The amount of code required to implement it</li>
<li>The size of the code that is generated</li>
<li>The amount of memory used</li>
<li>The number of features it considers necessary for useful work</li></ul>

<p>It is a language that makes complexity painful, but which reveals that a surprising amount can be accomplished without introducing any. Forth is the opposite of “bloat”. If you&#39;ve ever been like “Oh my God this Electron-based chat app is taking up 10% of my CPU at idle, what the HELL is it DOING, modern computing has gone MAD”, Forth is there to tell you that computing went mad decades ago, and that programs could be doing SO MUCH MORE with SO MUCH LESS.</p>

<h2 id="what-do-you-mean-forth">WHAT DO YOU MEAN, “FORTH”</h2>

<p>There is an expression about Forth: “If you&#39;ve seen one Forth, you&#39;ve seen one Forth.” Forth isn&#39;t a strictly-defined language, though there is a standardized dialect; it&#39;s more a set of ideas that tend to work well together.</p>

<p>In the past month, I wrote a tiny Forth system on a 286 running MS-DOS using Turbo C++ 1.01. It is my first time using Forth in anger, though I read a lot about it 15 years ago. When I refer to my Forth, I am referring to a system literally thrown together in two weeks, written by someone who does not really know Forth that well. It is slow and wildly nonstandard and it doesn&#39;t do very much, but I have enjoyed the process of writing it very much. If you are a grizzled old Forth grognard, please let me know if I have misrepresented anything.</p>

<h2 id="what-does-forth-not-do">WHAT DOES FORTH NOT DO</h2>

<p>Here is an incomplete list of things you may take for granted as a programmer that Forth, in its purest form, generally considers unnecessary waste:</p>
<ul><li>Garbage collection</li>
<li>Dynamic memory allocation</li>
<li>Garbage</li>
<li>Memory safety</li>
<li>Static types</li>
<li>Dynamic types</li>
<li>Objects</li>
<li>Polymorphic methods</li>
<li>Closures</li>
<li>Lexical scoping</li>
<li>The concept of global variables being in any way “bad”</li>
<li>Local variables</li>
<li>The ability to write “IF” statements at the REPL</li></ul>

<p>Most or all of these <em>can</em> be added to the language – the Forth standard, ANS Forth, specifies words for dynamic memory allocation and local variables. There are lots of object systems that people have built on top of Forth. Forth is a flexible medium, if you&#39;re willing to put in the work.</p>

<p>But the inventor of Forth, <a href="http://www.ultratechnology.com/1xforth.htm">Chuck Moore, literally said, in <em>1999</em></a>: “I remain adamant that local variables are not only useless, they are harmful.” In the Forth philosophy, <em>needing to use local variables</em> is a sign that you have not simplified the problem enough; that you should restructure things so that the meaning is clear without them.</p>

<h2 id="what-does-forth-look-like">WHAT DOES FORTH LOOK LIKE</h2>

<p>A core part of Forth is that all functions, or “words” in Forth terminology, operate on “the stack”. Words take arguments from the stack, and return their results on the stack.  There are a handful of primitive built-in words that do no useful work besides manipulating the stack.</p>

<p>What this means is that writing an expression tree as Forth code ends up turning into postfix notation. <code>(1 + 2) * (3 - 4)</code> becomes <code>1 2 + 3 4 - *</code>. Writing a number in Forth means “push that number onto the stack”.</p>

<p>Forth syntax is, with a few exceptions, radically, stupefyingly simple: Everything that&#39;s not whitespace is a word. Once the interpreter has found a word, it looks it up in the global dictionary, and if it has an entry, it executes it. If it doesn&#39;t have an entry, the interpreter tries to parse it as a number; if that works, it pushes that number on the stack. If it&#39;s not a number either, it prints out an error and pushes on.</p>

<p>Oops, I meant to describe the syntax but instead I wrote down the entire interpreter semantics, because <em>it fits in three sentences</em>.</p>

<p>The exception to the “whatever is not whitespace is a word” rule is that the interpreter is not the only piece of Forth code that can consume input. For example, <code>(</code> is a word that reads input and discards it until it finds a <code>)</code> character. That&#39;s how comments work – the interpreter sees the <code>(</code> with a space after it, runs the word, and then the next character it looks at is after the comment has ended. You can trivially define <code>(</code> in one line of Forth.</p>

<h2 id="why-the-hell-would-i-use-that">WHY THE HELL WOULD I USE THAT</h2>

<p>There are practical reasons:</p>
<ul><li>You need something tiny and reasonably powerful, and you don&#39;t care about memory safety</li>
<li>I&#39;m not sure I can think of any others</li></ul>

<p>And there are intangible reasons:</p>
<ul><li>Implementing a programming language that fits into a few kilobytes of RAM, that you understand every line of, that you can build one piece at a time and extend infinitely, makes you feel like a god-damn all-powerful wizard</li></ul>

<p>Part of the mystique of Forth is that you can get very metacircular with it – control flow words like IF and FOR are implemented in Forth, not part of the compiler/interpreter. So are comments, and string literals. The compiler/interpreter itself is usually, in some way, written in Forth. It turns out that you can discard virtually every creature comfort of modern programming and still end up with a useful language that is extensible in whatever direction you choose to put effort into.</p>

<p>Forth enters that rarefied pantheon of languages where the interpreter is, like, half a page of code, written in itself. In many ways it&#39;s kind of like a weird backwards lisp with no parentheses. And it can be made to run on the tiniest hardware!</p>

<p>The mental model for bootstrapping a Forth system goes something like:</p>
<ul><li>Write primitive words in assembly – this includes the complete Forth “VM”, as distinct from the Forth language interpreter/compiler. The set of built-in words can be very, very small – in the document “<a href="http://www.exemark.com/FORTH/eForthOverviewv5.pdf">eForth Overview</a>” by C. H. Ting, which I have seen recommended as an excellent deep-dive into the details of how to build a Forth environment, Ting states that his system is built with 31 “primitive” words written in assembly.</li>
<li>Hand-assemble “VM bytecode” for the interpreter/compiler and required dependencies – because of the extreme simplicity of the VM, you can generally program your macro assembler to do this job, and so this can meaningfully resemble the act of simply writing Forth code directly</li>
<li>Write all new words using the interpreter/compiler you just got running</li></ul>

<p>I say “interpreter/compiler” and not “interpreter and compiler” because they are literally mixed together; there is a global flag that determines whether the interpreter is in “compile mode” or not. It is done this way because it turns out that if you add the ability to mark a word as “always interpret, even in compile mode”, you have added the ability to extend the compiler in arbitrary ways.</p>

<h2 id="what-sucks-about-writing-forth">WHAT SUCKS ABOUT WRITING FORTH</h2>

<h3 id="any-word-that-takes-more-than-two-or-three-parameters-is-a-nightmare-to-read-or-write">Any word that takes more than two or three parameters is a nightmare to read or write</h3>

<p>Right now in my codebase I have a word that uses two global variables because I cannot deal with juggling all of the values on the stack. This word is absolutely not re-entrant and at some point I&#39;m going to need to rewrite it so that it is, and I am <em>not looking forward to it</em>. If I had local variables, it would be substantially less of a problem. But there&#39;s also part of me that thinks there must be some way to rewrite it to be simpler that I haven&#39;t figured out yet.</p>

<p>There&#39;s another word in my codebase that takes 4 or 5 parameters that I managed to write by breaking it up into, like, 8 smaller words, over the course of writing / rewriting for like an hour or two. I felt pretty proud when I finally got it working, but honestly I think it would have been pretty trivial to write in C with local variables. I miss them.</p>

<h3 id="shit-crashes">Shit crashes</h3>

<p>Remember the part about no memory safety? Yeah, there&#39;s <em>all kinds</em> of ways a wayward Forth system can go wrong. I forgot a <code>DROP</code> once in a frequently-used word and my computer hard-locked when the stack overflowed. (To be fair: my computer was a 286 running MS-DOS, so I was already in a situation where programming it meant rebooting it when I inevitably fucked something up.)</p>

<h3 id="nonexistent-error-messages">Nonexistent error messages</h3>

<p>The only error message my Forth system has is, if it doesn&#39;t recognize the word “foo”, it prints “foo?”  If, for example, I write an <code>IF</code> statement, but forget to end it with <code>THEN</code>, I don&#39;t get a compile error, I get — you guessed it — a runtime hard crash.</p>

<h2 id="what-rules-about-writing-forth">WHAT RULES ABOUT WRITING FORTH</h2>

<h3 id="it-s-compact-as-hell">It&#39;s compact as hell</h3>

<p>The majority of words I write are literally one line of code. They do a small job and get out.</p>

<h3 id="it-s-direct-as-hell">It&#39;s direct as hell</h3>

<p>Building abstractions in Forth is... different from building abstractions in other languages.  It&#39;s still a core, important activity, but because complex / expensive code is so much work to build, stacking expensive abstractions on top of each other is not really tenable. So you&#39;re left with very basic building blocks to do your job as straightforwardly as possible.</p>

<h3 id="you-are-absolutely-empowered-to-fix-any-problems-with-your-particular-workflow-and-environment">You are absolutely empowered to fix any problems with your particular workflow and environment</h3>

<p>People turn Forth systems into tiny OSes, complete with text editors, and I absolutely did not understand this impulse until I wrote my own. The Forth interpreter is an interactive commandline, and you can absolutely make it your own. Early on I wrote a decompiler, because it was easy. It&#39;s like half a screen of code. There are some cases it falls down on, but I wrote it in like a half hour and it works well enough for what I need.</p>

<h3 id="everything-is-tiny-and-easy-to-change-or-extend">Everything is tiny and easy to change or extend</h3>

<p>Remember when I said I wrote a decompiler because it was easy? Other things I changed in an evening or two:</p>
<ul><li>Added co-operative multitasking (green threads)</li>
<li>Added custom I/O overrides, so my interactive REPL sessions could be saved to disk</li>
<li>Rewrote the core interpreter loop in Forth</li>
<li>Rewrote the VM loop to not use the C stack</li>
<li>Instrumented the VM with debug output to catch a crash bug</li></ul>

<p>One of the things on my todo list is a basic interactive step-through debugger, which I suspect I&#39;ll be able to get basically up and running within, like, an hour or two? When things stay tiny and simple, you don&#39;t worry too much about changing them to make them better, you just do it.</p>

<h3 id="if-you-have-ever-wanted-an-assembly-code-repl-this-is-about-as-close-as-you-re-going-to-get">If you have ever wanted an assembly code REPL, this is about as close as you&#39;re going to get</h3>

<p>Forth is a dynamic language in which the only type is “a 16-bit number” and you can do whatever the fuck you want with that number. This is dangerous as hell, of course, but if you are writing code that has no chance of having to handle arbitrary adversarial input from the internet (like my aforementioned MS-DOS 286), it is surprising how refreshing and fun this is.</p>

<h2 id="this-sounds-interesting-what-is-the-best-way-to-learn-more">THIS SOUNDS INTERESTING, WHAT IS THE BEST WAY TO LEARN MORE</h2>

<p>I honestly do not know if there is a better way to understand Forth than just trying to build your own, and referring to other Forth implementations and documents when you get stuck. It&#39;s been my experience that those implementations and documents just don&#39;t make sense until you&#39;re neck-deep in a Forth of your own.  And it&#39;s tiny enough that you feel <em>good</em> about throwing away pieces that aren&#39;t working once you understand what does work.</p>

<p>I&#39;ve found the process of writing my own Forth and working within its constraints to be <em>far</em> more rewarding than any time I have tried working with existing Forths, even if on occasion I have wished for more complex functionality than I&#39;m willing to build on my own.</p>

<h2 id="what-have-i-learned-from-all-this">WHAT HAVE I LEARNED FROM ALL THIS</h2>

<p>I&#39;m very interested in alternate visions of what computing can look like, and who it can be for. Forth has some very interesting ideas embedded in it:</p>
<ul><li>A system does not have to be complex to be flexible, extensible, and customizable</li>
<li>A single person should be able to understand a computing system in its entirety, so that they can change it to fit their needs</li></ul>

<p>I find myself wondering a lot what a more accessible Forth might look like; are there more flexible, composable, simple abstractions like the Forth “word” out there? Our current GUI paradigms can&#39;t be irreducible in complexity; is there a radically simpler alternative that empowers individuals? What else could an individual-scale programming language look like, that is not only designed to enable simplicity, but to outright disallow complexity?</p>

<p>Forth is a radical language because it does not “scale up”; you cannot build a huge system in it that no one person understands and expect it to work. Most systems I have used that don&#39;t scale up – Klik &amp; Play, HyperCard, Scratch, that sort of thing – are designed for accessibility. Forth is not; it&#39;s designed for leverage. That&#39;s an interesting design space I wasn&#39;t even really aware of.</p>

<p>The lesson that implementing abstractions as directly as possible enables you to more easily change them is a useful one. And the experience of succeeding in building a programming environment from scratch on an underpowered computer in a couple of weeks is something I will bring with me to other stalled projects – you can sit down for a couple of hours, radically simplify, make progress, and learn.</p>

<p><a href="https://blog.information-superhighway.net/tag:forth" class="hashtag"><span>#</span><span class="p-category">forth</span></a> <a href="https://blog.information-superhighway.net/tag:retrocomputing" class="hashtag"><span>#</span><span class="p-category">retrocomputing</span></a> <a href="https://blog.information-superhighway.net/tag:essays" class="hashtag"><span>#</span><span class="p-category">essays</span></a></p>
]]></content:encoded>
      <guid>https://blog.information-superhighway.net/what-the-hell-is-forth</guid>
      <pubDate>Wed, 20 Feb 2019 20:51:15 +0000</pubDate>
    </item>
  </channel>
</rss>