Windows symlinks and how they help with WoW addon development

The Battle.net client has a very irritating bug that only really affects people who’re actively developing World of Warcraft addons on Windows: it can get stuck checking whether the game client needs to be updated, stopping you from launching the game. If you’ve ever seen it stick on “Updating” with its status message just saying “Initializing…” for minutes at a time, followed by it asking for you to approve admin permissions for the client so it can try again (regardless of whether it already has them), then you’ve been bitten by this bug.

I’m told it’s because of some sort of interaction with the (hidden) .git folders (and the vast number of files contained within them) in addons that’ve been checked out for development. Fortunately, there’s a way to keep your source-controlled addon development practices while also stopping the Battle.net client from getting stuck, and that is: Windows symlinks.

What’s a symlink? It’s short for “symbolic link”, and it’s a way to tell your operating system that a file or directory should pretend to be in multiple places at once. In practice, it’s a useful tool to either organize files that other programs need to look at, or to avoid having to manually synchronize changes between files.

In the case of World of Warcraft, for whatever reason — presumably some specifics of the file-monitoring APIs Blizzard chose to use — if you make your addon directories into symbolic links pointing at another place where you’ve checked them out, it’ll stop the Battle.net client from getting confused.

You may want to read a longer explanation of all the details about symbolic links and what you might need to do to enable them, but as a quick summary:

  1. Create a new directory for your development checkouts of addons to live in. I’ll use c:\src\wow\ for these examples.
  2. Move all your existing checked-out addons there. You can leave other addons that aren’t a git checkout where they’ve always lived. (I think it’s handy for getting a quick overview of what’s yours, honestly.)
  3. Create the symbolic links, by opening a command prompt and typing something like this for each of your addons: mklink /D "C:\World of Warcraft\_retail_\Interface\AddOns\MyAddon" "C:\src\wow\MyAddon"

Automating that

Now, I found myself with a classic programmer problem of wanting to avoid a very tedious task: typing almost the same mklink command 41 times in a row. As such, I spent longer than the tedious option would have taken writing a script that’ll do it for me. Fortunately, it’ll also serve as documentation for me on what to do next time I need a new addon checked out. 🤩

Personally, I use WSL to run Linux tools on my Windows machine for development purposes, and so I naturally went for a bash script. I could totally have kept it all inside pure Windows and written a batch script or similar, but that sounds quite awful and so I didn’t.

A lot of the time writing this went to one important complication: the WSL / Linux ln command will not create a Windows symbolic link. As such, I had to learn how to call out to cmd.exe, and how to convert WSL file paths into Windows file paths. I’d never had to do this before, so it was time well spent.

The result is this link.sh script. When run from inside your development addons folder, it’ll make a symbolic link in the main WoW addons folder to every addon you have checked out there. Feel free to use it if you want, just note that you may need to change the WoW install location that’s hardcoded at the top of the script.
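
In case the link ever rots, the heart of the script is something like this (a trimmed-down sketch; the real link.sh has more error handling, and the hardcoded path is an assumption you’ll want to adjust):

#!/bin/bash
# Sketch only. Run from inside your development folder (e.g. /mnt/c/src/wow),
# in a shell that's allowed to create symlinks (admin or developer mode).

# Adjust to match your WoW install, as seen from WSL:
WOW_ADDONS="/mnt/c/World of Warcraft/_retail_/Interface/AddOns"

# wslpath is happiest with paths that already exist, so convert the
# addons folder once and build the link paths from that.
addons_win=$(wslpath -w "$WOW_ADDONS")

for dir in */; do
    addon="${dir%/}"
    target=$(wslpath -w "$(pwd)/$addon")
    # ln can't create a Windows symlink from WSL; call out to mklink instead.
    cmd.exe /c mklink /D "${addons_win}\\${addon}" "$target"
done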

Hosting: Linode

Last year, after almost a decade of using them as my host, WebFaction started shutting down. They’d been sold to GoDaddy back in 2018 and had mysteriously stopped doing any feature development around then, so it wasn’t a huge surprise.

So, I bit the bullet, and switched to Linode.

This isn’t entirely something I’d recommend to everyone who’s used to using a service like WebFaction. WebFaction, despite being more fiddly than many hosts, still handled a lot of things for you. It ran servers and managed their configuration; you just told it that you wanted an app of type X to run, and it did.

Linode, by contrast, is a VPS host. That means you get a virtual server, and have to manage it yourself. So I’m spending $5/month for their shared 1GB plan, and it gets me a virtual machine (I picked Debian) that I have to ssh into and cope with.

Now, Linode does offer a lot of guides to setting up servers, which I found very helpful. It’s still a fiddly learning experience — tuning the software to work on your VPS is a pain, and I wound up with Apache using slightly too many resources, which led to a CPU spiral over a few days.
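
For what it’s worth, the usual fix for that class of problem is to cap how many worker processes Apache may spawn, so the worst case still fits in the VPS’s 1GB. With the prefork MPM, that’s something like this (illustrative numbers, not my exact config):

# /etc/apache2/mods-available/mpm_prefork.conf
<IfModule mpm_prefork_module>
    StartServers             2
    MinSpareServers          2
    MaxSpareServers          4
    # low enough that this many workers can't exhaust RAM
    MaxRequestWorkers        20
    MaxConnectionsPerChild   1000
</IfModule>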

Still, if you’re willing to put the work in, or just have more complex needs than "HTML is here", it’s a good hosting option.

World of Warcraft addon packaging but with GitHub Actions this time

I wrote about addon-packaging a year ago, mostly in the context of wanting to package addons for Classic. Since then, the environment has shifted a bit due to Overwolf buying Curse, and also GitHub Actions being released.

There’s no particular reason to think Overwolf will be any worse a steward of Curse and its addon-tooling than Twitch was, but controlling one’s own packaging and thus being able to easily shift to other platforms if needed appeals.

If you’re sticking your addon on GitHub anyway, it seems to make some sense to use GitHub Actions rather than (lightly) abusing Travis’ continuous-integration features for something which is only continuous-integration if you squint at it.

Basic setup

You need to have an addon which is configured so that the BigWigs packager knows what to do with it. The previous post walks you through that — just don’t do the "Travis" part.

Making actions

Much like Travis was controlled by a .travis.yml file, Actions are defined in your repo in a .github/workflows/ directory. GitHub will walk you through making one on the website, or you can just make the file and commit it.

Create .github/workflows/package.yml:

name: Package Addon

on:
  push:
    branches: [ main ]
    tags: [ '*' ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          # the packager runs against a clone of your repo, and full
          # history lets it generate changelogs from your commits
          fetch-depth: 0

      - name: Create Package
        uses: BigWigsMods/packager@master
        env:
          CF_API_KEY: ${{ secrets.CF_API_KEY }}
          WOWI_API_TOKEN: ${{ secrets.WOWI_API_TOKEN }}
          GITHUB_OAUTH: ${{ secrets.GITHUB_TOKEN }}

You’ll need to configure those secrets, which you can do through the "settings" page of your repo. The previous post walks you through generating the relevant API keys. You can skip the GitHub one, as Actions magically gives you a token for that.

And… that’s it. Every time you push a commit or tag to this repo, the BigWigs packager will run and upload the addon to the sites you’ve configured API keys and TOC metadata for.

EDIT 2021-07-13: the BigWigs packager has an official action now, so I replaced the use of curl to fetch the script.

World of Warcraft Classic addon packaging

UPDATE: there’s a sequel to this post, which tells you how to do this with GitHub Actions instead.

If you’re writing an addon which needs to work in just the retail version of WoW, or just in Classic, this post doesn’t really apply to you. But if you want to maintain versions of your addon that work in both environments, you may want some tooling around that. Or you might just want to handle your own packaging, in which case this will also help.

The Environment

World of Warcraft addon development has been heavily shaped by WowAce/CurseForge, which popularized a continuous-integration style of development. You start a project, it gives you a code repository, and it automatically packages up the contents of that repository for distribution. It also lets you tag releases, to give addon users stable points to work with.

CurseForge supports having retail and Classic versions of an addon living in the same project, but doesn’t support any way of packaging that up automatically. It’ll package up the current state of the master branch of your repository, and flag it as classic/retail depending on the TOC metadata. This makes keeping a dual release impractical.

Why not just keep entirely separate projects?

It’s so much more wooooooork.

But seriously, Classic is based on a fairly recent version of the WoW client, and is nearly completely API-compatible with the retail version. This means that most addons are going to need fairly minor tweaks to work in both. A long-lived classic branch with the necessary tweaks and cherry-picked future feature updates is very practical here.

Run your own packager

The BigWigs developers, for various historical reasons, have written their own version of the WowAce addon-packager script. It supports everything we want to do, unlike the standard WowAce packager.

You need to add some metadata to your addon’s TOC file so the packager script knows where to upload it. Find your project ID in the sidebar of your project page, and add a line like this:

## X-Curse-Project-ID: [your-project-id]

Technically, you can stop at this point, download the packager, and manually run the entire process from the command line. But again, that’s a lot of work, and I’d rather keep the continuous deployment workflow. If you’d rather go do that, instructions are here.

For the rest of this I’m going to aim for continuous deployment via keeping your addon in a GitHub repository, and running the packager through Travis-CI.

GitHub

Make a repository for your addon on GitHub. Import the existing code to there by going to your current checkout of the code and doing:

$ git remote set-url origin [your-github-url]
$ git push --mirror origin

Now go to your addon’s source settings on WowAce/CurseForge and switch it to point to your new github repo. This will disable all the normal automatic-packaging behavior, so our next step will get that back.

Travis

Add a new file to the top level of your repository: .travis.yml

language: minimal

script:
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash

notifications:
  email:
    on_success: never
    on_failure: always

You can expand this if you want to perform further checks. I run luacheck on everything to make sure there are no likely errors, for instance.

Get a Travis CI account. You can sign in with your GitHub account, and it’ll have access to your repositories.

Enable your addon’s repository, and then go into its settings. Disable "Build pushed pull requests".

Go to the CurseForge API tokens page and generate an API token.

In the "Environment Variables" section of the Travis settings, add one called CF_API_KEY with the value you just generated.

You’re now back to having continuous integration. Every time you push a commit to your github repo, the packager will run and upload an alpha build to your project. Release builds will be triggered by tags, just like they were when you were on CurseForge.

But what about Classic?

Depending on your addon, the number of changes you need to make will vary. If you can make a version that works simultaneously in retail and classic, or only needs minor tweaks…

Single branch

All you’ll need to do is adjust your TOC file a bit:

#@retail@
## Interface: 80200
#@end-retail@
#@non-retail@
# ## Interface: 11302
#@end-non-retail@

…and adjust your .travis.yml so that the script section is:

script:
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash -s -- -g 1.13.2

Now every time you push, a retail and classic build will be triggered. The TOC will be adjusted for each build so the correct Interface version is listed.

Multiple branches

You may need more extensive changes, or just prefer to avoid assorted feature-detection conditionals in your addon. If so, a long-lived classic branch is an option.

Make the branch:

$ git checkout -b classic

…and adjust your .travis.yml so that the script section is:

script:
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash -s -- -g 1.13.2

Now update your addon for classic, and whenever you push this branch, a new release will be uploaded for classic only.

For common changes, remember that you can cherry-pick patches between branches like so:

$ git commit -m "Important cleanup of stuff for retail"
[master cec3e72] Important cleanup of stuff for retail
$ git checkout classic
$ git cherry-pick cec3e72

…though you might need to resolve some conflicts.

I want WoWInterface too!

The BigWigs packager will upload to WoWI as well. Unfortunately, WoWI doesn’t let you have versions for retail/classic in the same addon, so you’ll need to start a new addon for your classic branch. Once you have that, update each branch’s TOC with:

## X-WoWI-ID: [wowi-addon-id]

You can find the addon ID in the URL of your addon.

If you’re using the single-branch setup above, you’ll need to override the ID for one of them. You can do that in .travis.yml by finding the retail/classic script call and adding -w [wowi-addon-id] to it.
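
For instance, if your TOC carries the retail ID, the classic script call gets the override, making the script section look something like:

script:
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash -s -- -g 1.13.2 -w [wowi-addon-id]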

Now go to the WoWInterface API token page and generate a token. Add it to your Travis-CI environment variables as WOWI_API_TOKEN.

WoWInterface only supports release versions of addons, so it’ll only upload there whenever you push a tag.

World of Warcraft: Classic memories

WoW Classic is launching in two days’ time, and it’s stirring up some memories. Possibly tinted by a delightful haze of nostalgia.

…I modded my UI heavily

The vanilla WoW memory that most jumps to my mind these days is the first time I really fell off the rails of playing cautiously.

I was playing with my spouse, and we were questing through Stranglethorn Vale; I as a Warlock and them as a Priest. I did not yet appreciate what being a Warlock meant. I would soon learn.

Vanilla was an unforgiving game by modern standards, for all that it was vastly more friendly than its contemporaries. Regular play involved carefully picking your fights, and anything more than one or two opponents at a time could very easily overwhelm you. Whole classes and specs were barely viable for solo play, or alternatively completely useless for endgame play.

We were doing something which involved having to fight our way through camps of trolls. These were clustered fairly tightly together, so we were carefully separating them and killing them one at a time. It was slow. Then something unfortunate happened, and we got a group of them at once. Obviously we were going to die. It was frantic; I was throwing damage-over-time spells at everything, and my spouse was furiously healing me and my pet. And then… everything died. We didn’t even get close to losing the fight. This was a surprise.

Naturally, we did it again. It turned out that a Warlock with a tank-pet and a Priest on tap for healing was basically indestructible against entire camps of enemies, once you knew what you were doing.

There’s nothing so special about that discovery itself, but I still fondly remember that moment when everything came together and we blew past what we thought our limits were.

It’s like that sometimes

I reserved that Warlock’s name again for the Classic launch. I rather doubt that I’ll stick with it; the nostalgia won’t live up to the unrelenting grind. But it’ll be interesting to dip my toes back in and see a different era once more.

WebFaction helpers: HTTPS and www

I like WebFaction, and have been using them for years now, but I’m the first to admit they’re a bit less… friendly… in some regards than many hosts.

I referenced a few of these unfriendly matters back when I mentioned switching to them, with an offhand "so I solved that". But I’ve decided to go into a little more detail now on one of these issues — common site redirections. Specifically, adding / removing www from your URL, and enforcing HTTPS on a domain.

Other simple hosts I’ve used have just had a checkbox for these features in your settings. For WebFaction, however, you need to write your own mini-application to handle it. When I say "mini", I mean it — all you need is the bare minimum of an app configured enough that it’ll interpret an .htaccess file.

I’ll assume from here that you’re familiar with the general WebFaction terminology, and the distinction between "application", "domain", and "website".

Remove www

Make a new application with type "Static" and subtype "Static/CGI/PHP-7.2".

Name the application redirect_www.

SSH in, and in the application directory create a file called .htaccess with the contents:

RewriteEngine on

RewriteCond %{HTTPS} =on
RewriteRule ^(.*)$ - [env=proto:https]
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ - [env=proto:http]

RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
RewriteRule ^(.*)$ %{ENV:proto}://%1%{REQUEST_URI} [R=301,QSA,NC,L]

This is more complicated than it strictly has to be, because it checks and remembers whether the site is on HTTP/S without you needing to explicitly configure it or make multiple versions of the application. I wanted something generic, because I have a bunch of different websites hosted.

An "add www" application is a fairly simple modification to this.

Assign the redirect_www application to a new website, with the domain www.whatever-your-domain-is.com.

Enforce HTTPS

Make a new application with type "Static" and subtype "Static/CGI/PHP-7.2".

Name the application redirect_https.

SSH in, and in the application directory create a file called .htaccess with the contents:

RewriteEngine On

RewriteRule ^\.well-known/ - [NC,L]

RewriteCond %{HTTPS} !on [NC]
RewriteCond %{HTTP:X-Forwarded-SSL} !on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This will redirect so long as HTTPS isn’t already enabled, ignoring the .well-known subdirectory, which must be left unredirected for things like letsencrypt certificate renewal.

Assign this application to a non-secure website with the same domain as a secure website you’re hosting.

Why didn’t I respond to your pull request?

I have some fairly popular open source packages up on GitHub. Happily, I get people submitting pull requests, adding features or fixing bugs. It’s great when this happens, because people are doing work that I don’t want to do / haven’t gotten to yet / didn’t think of.

…but I’m pretty bad at responding to these. They tend to languish for a while before I get to them. There’s a decent number which I’ve never even replied to.

Why is this?

Fundamentally, it’s because reviewing a pull request is potentially a lot of work… and the amount of work isn’t necessarily obvious up-front. This means I only tend to do reviews for anything which isn’t obviously trivial when I’m feeling energetic and like I have a decent amount of free time.

First, there are some common potential problems which might turn up:

  1. It does something I don’t want to include in the project. This is the only outright deal-breaker. Project owner’s prerogative.

  2. It doesn’t work. This happens more often than you’d think, generally because the submitter has written code for the exact use-case they had, and hasn’t considered what will happen if someone tries to use it in a different way.

  3. It works, but not in the way I want it to. For instance, it might behave inconsistently with existing features, and I’d want it adjusted to match.

  4. It should be written differently. This tends to include feedback like “you should use this module” / “this code should really go over here” / “this duplicates code”.

  5. It has coding style violations. Things like indentation, variable names, or trailing whitespace. These aren’t functional problems, but I still don’t want to merge them, because I’d just have to make another commit to fix them myself.

Once I’ve read the patch and given this feedback, which might itself take a while since design feedback and proper testing that exercises all code paths aren’t necessarily quick, I’ll respond asking for changes. Then there’s an unknown wait period while the submitter finds time to respond to those changes. Best-case for me, they agree with everything I said, make all requested changes perfectly, and update their pull request with them! Alas, people don’t always think I’m a font of genius, so there’s an unknowable amount of back-and-forth needed to find a compromise position we both agree on. This generally involves enough time between responses that the specifics of the patch aren’t in my head any more, so I have to repeat the review process each time.

What can I do better?

One obvious fix: delegate more. Accept more people onto projects and give them commit access, so I don’t have to be the bottleneck. I’m bad at doing this, because my projects tend to start as “scratch my itch” tasks, and I worry about them drifting away from code I’m personally happy with. Plus, I feel that if the problem is “I don’t review patches promptly”, “make someone else do it instead” is perhaps disingenuous as a response. 😀

So, low-hanging fruit…

Coding style violations, despite being trivial, are probably the most common reason for a patch sitting unmerged as I wait for someone to respond to a request to fix them. This is kind of my fault, because I have a bad habit of not documenting the coding style I expect to be used in my projects, relying on people writing consistent code by osmosis. Demonstrably, this doesn’t work.

As such, I’m starting to add continuous integration solutions like Travis to my projects. Without any particular work on my part, this lets me automatically warn contributors about coding style concerns which can be linted for, via tools like flake8 or editorconfig. If their editing environment is set up for it, they’ll get feedback as they write their patch… and if not, they’ll be told on GitHub when a pull request fails the tests, and don’t have to wait for me to get back to them about it.
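
For a Python project, the Travis configuration for that can stay tiny; something like this (a minimal hypothetical example):

language: python
python: "3.6"
install:
  - pip install flake8
script:
  - flake8 .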

The “it doesn’t work” issue can be worked into this framework as well, with a greater commitment to writing tests on my part. If my project is already well-covered, I can have the CI build check test coverage, and thus require that contributors are providing tests that cover at least most of what they’re submitting, and don’t break existing functionality.

This should reduce me to having to personally respond to a smaller set of “how should this be written?” issues, which I think will help.

Sublime Text packages: working in 2 and 3

I maintain the Git package for Sublime Text. It’s popular, which is kind of fun and also occasionally stressful. I recently did a major refactor of it, and want to share a few tips.

I needed to refactor it because, back when the Sublime Text 3 beta came out, I had made a branch of the git package to work with ST3, and was thus essentially maintaining two versions of the package, one for each major Sublime version. This was problematic, because all new features needed to be implemented twice, and wound up hurting my motivation to work on things.

Why did I feel the need to branch the package? Well…

The Problem

Sublime Text is currently suffering from a version problem. There’s the official version, Sublime Text 2, and the easily available beta version, Sublime Text 3. They’re both in widespread use. This division has ground on for around three years now, and is a pain to deal with.

It’s annoying, as a plugin developer, because of a few crucial differences:

Sublime Text 2:

  • Uses Python 2.7.
  • Puts all package contents into a shared namespace.

Sublime Text 3:

  • Uses Python 3.3.
  • Puts all package contents into a module named for the package.
  • Has some new APIs, removes some old APIs.

…yes, the Sublime Text 2 / 3 situation is an annoyingly close parallel to the general Python 2 / 3 situation that is itself a subset of the Sublime problem. I prefer less irony in my life.

Python

What changed in Python 3 is a pretty well-covered topic, which I’m not going to go into here.

Suffice it to say that the changes are good, but introduce some incompatibilities which need code to be carefully written if it wants to run on both versions.

Imports

If your plugin is of any size at all, you probably have multiple files because separation of code into manageable modules is good. Unfortunately, the differing way that packages are treated in ST2 vs ST3 makes referring to these files difficult.

In Sublime Text 2, all files in packages are in a great big “sublime” namespace. Any package can import modules from any other package, perhaps accidentally.

For instance, in ST2…

import comment

…gets us the Default.comment module, which provides the built-in “toggle comment on a line” functionality. Unless some other package has a comment.py, in which case what we’ll get becomes order-of-execution dependent.

Note the fun side-effect of this: if any package has a file which shares a name with anything in the standard library, it’ll “shadow” that and any other package which then tries to use that part of the standard library will break.

Because of these drawbacks, Sublime Text 3 made the very sensible decision to make every package its own module. That is, to get that comment module, we need to do:

import Default.comment

This is better, and makes it harder to accidentally break other packages via your own naming conventions. However, it does cause compatibility problems in two situations:

  1. You want to access another package
  2. You want to use relative imports to access files in your own package

The latter case behaves differently depending on whether you’re inside a module or not:

# ST2:
from git import GitTextCommand, GitWindowCommand, git_root
from status import GitStatusCommand

# ST3:
from .git import GitTextCommand, GitWindowCommand, git_root
from .status import GitStatusCommand

Editing text

In Sublime Text 2 you had to call edit = view.begin_edit(...) and view.end_edit(edit) to group changes you were making to text, so that undo/redo would bundle them together properly.

In Sublime Text 3, these were removed, and any change to text needs to be a sublime_plugin.TextCommand which will handle the edit-grouping itself without involving you.

The Solution (sort of)

If you want to write a plugin that works on both versions, you have to write Python that runs on 2 and 3, and has to play very carefully around relative imports.

Python 2 / 3

A good first step here is to stick this at the top of all your Python files:

from __future__ import absolute_import, unicode_literals, print_function, division

This gets Python 2 and 3 mostly on the same page; you can largely just write for Python 3 and expect it to work in Python 2. There are still some differences to be aware of, mostly in areas where the standard library was renamed, or when you’re dealing with points where the difference between bytes and str actually matters. But these can be worked around.

For standard library reshuffling, checking exceptions works:

try:
    # ST3
    from http.client import HTTPConnection
except ImportError:
    # ST2
    from httplib import HTTPConnection

If your package relies on something which changed more deeply, more extensive branching might be required.

Imports

If you want to access another module, as above, this is a sensible enough place to just check for exceptions.

try:
    # ST3
    from Default import comment
except ImportError:
    # ST2
    import comment

You could check for the version of Sublime, of course, but the duck-typing approach here seems more Pythonic to me.

When accessing your own files, what made sense to me was to make it consistent by moving your files into a submodule, which means that the “importing a file in the same module” case is all you ever have to think about.

Thus: move everything into a subdirectory, and make sure there’s an __init__.py within it.
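
For this post’s Git package, the layout ends up roughly like this (file names illustrative):

Git/
    git/
        __init__.py
        core.py
        add.py
        status.py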

There’s one drawback here, which is that Sublime only notices commands that are in top-level package files. You can work around this with a my_package_commands.py file, or similar, which just imports your commands from the submodule:

try:
    # Python 3
    from .git.core import GitInitCommand, GitFooCommand
    from .git.add import GitAddCommand
except (ImportError, ValueError):
    # Python 2
    from git.core import GitInitCommand, GitFooCommand
    from git.add import GitAddCommand

There’s one last quirk to this, which only applies to you during package development: Sublime Text only reloads your plugin when you change a top-level file. Editing a file inside the submodule does nothing, and you have to restart Sublime to pick up the changes.

I noticed that Package Control has some code to get around this, so I copied its approach in my top-level command-importing file, making it so that saving that file will trigger a reload of all the submodule contents. It has one minor irritation, in that you have to manually list files in the right order to satisfy their dependencies. Although one could totally work around this, I agree with the Package Control author that it’s a lot simpler to just list the order and not lose oneself in metaprogramming.
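
Stripped way down, the idea looks something like this (a sketch; the module names are hypothetical, and the real Package Control code handles more edge cases):

import sys

try:
    from imp import reload  # Python 3 moved reload out of the builtins
except ImportError:
    pass                     # Python 2 still has reload as a builtin

# Submodule files, listed manually in dependency order, so each one is
# reloaded after the modules it imports from.
modules_to_reload = [
    'git.core',
    'git.add',
]

for name in modules_to_reload:
    if name in sys.modules:
        reload(sys.modules[name])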

Editing text

Fortunately, sublime_plugin.TextCommand exists in Sublime Text 2, with the same API signature as in Sublime Text 3, so all you have to do here is wrap all text-edits into a TextCommand that you execute when needed.
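
For example, a minimal command of that sort might look like this (a sketch, not code from the actual Git package):

import sublime_plugin

class GitInsertOutputCommand(sublime_plugin.TextCommand):
    # Sublime supplies the edit object, grouping this change for undo.
    def run(self, edit, text=''):
        self.view.insert(edit, self.view.size(), text)

Other plugin code then calls view.run_command('git_insert_output', {'text': 'some output'}) instead of touching the buffer directly, and it works the same way in both versions.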

Conclusion

Getting a package working in Sublime Text 2 and 3 simultaneously is entirely doable, though there are some nuisances involved, which is appropriate given that “run in Python 2 and 3 simultaneously” is a subset of the problem. That said, if you do what I suggest here, it should largely work without you having to worry about it.

Wikimedia

I mentioned that I hadn’t been updating this blog, and that wasn’t just a matter of there being nothing to talk about.

Back in July I got laid off by DeviantArt. Since that was their second layoffs round of 2015, I think it’s fair to say that they’re having some problems.

This was non-ideal for me. In retrospect, I should probably have started looking around for a new job after the first layoffs round, but I’ll count that as a learning experience.

Fortunately, I then spent a month of downtime and relaxing, because I’d been terrible at taking vacation time at DeviantArt, and they thus had to pay out a lot of vacation hours to lay me off.

Now I’m part of the Visual Editor team at the Wikimedia Foundation. I help people edit Wikipedia, essentially.

Migrating from Jekyll to WordPress

Funnily enough, there aren’t all that many resources for people who’re moving from Jekyll to WordPress. I took some advice from a post by Fabrizio Regini, but had to modify it a bit, so here’s what I figured out…

My starting point was a Jekyll-based site stored on github. Comments were stored using Disqus.

As a first step, I installed WordPress on my hosting. This was, as they like to boast, very easy.

Next I had to get all my existing content into that WordPress install. I decided the easiest way to do this was to use the RSS import plugin that WordPress recommends. So I added an RSS export file to my Jekyll site and ran Jekyll to have it build a complete dump of all my posts which I could use.
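
The export file itself is just a Liquid template that loops over every post; something along these lines (a sketch, not the exact file I used):

---
layout: null
---
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>{{ site.title | xml_escape }}</title>
    <link>{{ site.url }}</link>
    {% for post in site.posts %}
    <item>
      <title>{{ post.title | xml_escape }}</title>
      <link>{{ site.url }}{{ post.url }}</link>
      <pubDate>{{ post.date | date_to_rfc822 }}</pubDate>
      <content:encoded>{{ post.content | xml_escape }}</content:encoded>
    </item>
    {% endfor %}
  </channel>
</rss>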

Here I ran into a problem. I’d set up my new WordPress site on PHP 7… and the RSS importer wasn’t able to run because it was calling a removed function. It was just a magic-quotes-disabling function, so I tried editing the plugin to remove it. However, after doing this I found that running the importer on my completely-valid (I checked) RSS file resulted in every single post having the title and contents of the final post in the file. So, plugin debugging time!

While doing this I discovered that the RSS importer was written using regular expressions to parse the XML file. Although, yes, that’s about as maximally compatible as possible, I decided it was better not to go down the rabbit hole of debugging it, and just rewrote the entire feed-parsing side to use PHP’s built-in-since-PHP-5 SimpleXML parser. This fixed my title/contents problem.
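
The core of that rewrite amounts to something like this (illustrative, not the plugin’s actual code):

$rss = simplexml_load_file($file);
foreach ($rss->channel->item as $item) {
    $title = (string) $item->title;
    // content:encoded is namespaced, so ask for that namespace's children
    $content = (string) $item->children('content', true)->encoded;
    // ...hand each post off to the WordPress import machinery
}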

My version of the plugin is available on github. I can’t say that I tested it on anything besides the specific RSS file that I generated, but it should be maintaining the behavior of the previous plugin.

With all my posts imported, I went through and did a little maintenance:

  • The import gave me post slugs which were all auto-generated from the title, while some of mine in Jekyll had been customized a bit, so I updated those to keep existing URLs working.
  • All images in posts needed to be updated. I went through and fixed these up by uploading them through WordPress.
  • Some markup in posts needed to be fixed. Mostly involving <code> tags.

Next came importing comments from Disqus. I tried just installing the Disqus plugin and letting it sync, but it seems that relies on you having WordPress post IDs associated with your comments… which I naturally didn’t. So I went out and found a Disqus comment importer plugin… which, much like the RSS importer, was broken. It expects a version of the Disqus export file which was current around 5 years ago, when it was last updated.

Thus we have my version of the Disqus comment importer plugin. It tries to work out the ID of your posts by looking at the URL. This works pretty well, but I did have to edit a few of the URLs in the export file to make sure they matched my current permalink structure. If you’ve never changed your permalinks, you should be good without that step.

Migration: complete.