World of Warcraft Classic addon packaging

If you’re writing an addon which needs to work in just the retail version of WoW, or just in Classic, this post doesn’t really apply to you. But if you want to maintain versions of your addon that work in both environments, you may want some tooling around that. Or you might just want to handle your own packaging, in which case this will also help.

The Environment

World of Warcraft addon development has been heavily shaped by WowAce/CurseForge, which popularized a continuous-integration style of development. You start a project, it gives you a code repository, and it automatically packages up the contents of that repository for distribution. It also lets you tag releases, to give addon users stable points to work with.

CurseForge supports having retail and Classic versions of an addon living in the same project, but doesn’t support any way of packaging them both automatically. It’ll package up the current state of the master branch of your repository, and flag it as classic/retail depending on the TOC metadata. This makes keeping a dual release impractical.

Why not just keep entirely separate projects?

It’s so much more wooooooork.

But seriously, Classic is based on a fairly recent version of the WoW client, and is nearly completely API-compatible with the retail version. This means that most addons are going to need fairly minor tweaks to work in both. A long-lived classic branch with the necessary tweaks and cherry-picked future feature updates is very practical here.

Run your own packager

The BigWigs authors, for various historical reasons, have written their own version of the WowAce addon-packager script. It supports everything we want to do, unlike the standard WowAce packager.

You need to add some metadata to your addon’s TOC file so the packager script knows where to upload it. Find your project ID in the sidebar of your project page, and add a line like this:

## X-Curse-Project-ID: [your-project-id]

Technically, you can stop at this point, download the packager, and manually run the entire process from the command line. But again, that’s a lot of work, and I’d rather keep the continuous deployment workflow. If you’d rather go do that, instructions are here.

For the rest of this I’m going to aim for continuous deployment by keeping your addon in a GitHub repository and running the packager through Travis CI.

GitHub

Make a repository for your addon on GitHub. Import your existing code by going to your current checkout and running:

$ git remote set-url origin [your-github-url]
$ git push --mirror origin

Now go to your addon’s source settings on WowAce/CurseForge and switch it to point to your new GitHub repo. This will disable all the normal automatic-packaging behavior, so our next step will get that back.

Travis

Add a new file to the top level of your repository: .travis.yml

language: minimal

script:
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash

notifications:
  email:
    on_success: never
    on_failure: always

You can expand this if you want to perform further checks. I run luacheck on everything to make sure there are no likely errors, for instance.
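For instance, a sketch of a .travis.yml that also runs luacheck before packaging; the install step assumes luacheck is available via LuaRocks on the build image, so adjust it for your setup:

```yaml
language: minimal

install:
  # Assumption: the build image provides LuaRocks (or can install it).
  - sudo luarocks install luacheck

script:
  - luacheck .
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash

notifications:
  email:
    on_success: never
    on_failure: always
```

If the lint step fails, the packager never runs, so a style error can’t ship a broken build.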

Get a Travis CI account. You can sign in with your GitHub account, and it’ll have access to your repositories.

Enable your addon’s repository, and then go into its settings. Disable "Build pushed pull requests".

Go to the CurseForge API tokens page and generate an API token.

In the "Environment Variables" section of the Travis settings, add one called CF_API_KEY with the value you just generated.

You’re now back to having continuous integration. Every time you push a commit to your GitHub repo, the packager will run and upload an alpha build to your project. Release builds will be triggered by tags, just like they were when you were on CurseForge.

But what about Classic?

Depending on your addon, the number of changes you need to make will vary. If you can make a version that works simultaneously in retail and Classic, or only needs minor tweaks…

Single branch

All you’ll need to do is adjust your TOC file a bit:

#@retail@
## Interface: 80200
#@end-retail@
#@non-retail@
# ## Interface: 11302
#@end-non-retail@

…and adjust your .travis.yml so that the script section is:

script:
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash -s -- -g 1.13.2

Now every time you push, a retail and classic build will be triggered. The TOC will be adjusted for each build so the correct Interface version is listed.

Multiple branches

You may need more extensive changes, or just prefer to avoid assorted feature-detection conditionals in your addon. If so, a long-lived classic branch is an option.

Make the branch:

$ git checkout -b classic

…and adjust your .travis.yml so that the script section is:

script:
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash -s -- -g 1.13.2

Now update your addon for classic, and whenever you push this branch, a new release will be uploaded for classic only.

For common changes, remember that you can cherry-pick patches between branches like so:

$ git commit -m "Important cleanup of stuff for retail"
[master cec3e72] Important cleanup of stuff for retail
$ git checkout classic
$ git cherry-pick cec3e72

…though you might need to resolve some conflicts.

I want WoWInterface too!

The BigWigs packager will upload to WoWI as well. Unfortunately, WoWI doesn’t let you have versions for retail/classic in the same addon, so you’ll need to start a new addon for your classic branch. Once you have that, update each branch’s TOC with:

## X-WoWI-ID: [wowi-addon-id]

You can find the addon ID in the URL of your addon.

If you’re using the single-branch setup above, you’ll need to override the ID for one of them. You can do that in .travis.yml by finding the retail/classic script call and adding -w [wowi-addon-id] to it.
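For example, if the classic build should go to the separate WoWI addon, the script section might end up as follows (keeping the ID as a placeholder):

```yaml
script:
  # Retail build: picks up X-WoWI-ID from the TOC.
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash
  # Classic build: override the WoWInterface ID with the classic addon's.
  - curl -s https://raw.githubusercontent.com/BigWigsMods/packager/master/release.sh | bash -s -- -g 1.13.2 -w [wowi-addon-id]
```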

Now go to the WoWInterface API token page and generate a token. Add it to your Travis-CI environment variables as WOWI_API_TOKEN.

WoWInterface only supports release versions of addons, so it’ll only upload there whenever you push a tag.

World of Warcraft: Classic memories

WoW Classic is launching in two days’ time, and it’s stirring up some memories. Possibly tinted by a delightful haze of nostalgia.

…I modded my UI heavily

The vanilla WoW memory that most jumps to my mind these days is the first time I really fell off the rails of playing cautiously.

I was playing with my spouse, and we were questing through Stranglethorn Vale; I as a Warlock and them as a Priest. I did not yet appreciate what being a Warlock meant. I would soon learn.

Vanilla was an unforgiving game by modern standards, for all that it was vastly more friendly than its contemporaries. Regular play involved carefully picking your fights, and anything more than one or two opponents at a time could very easily overwhelm you. Whole classes and specs were barely viable for solo play, or alternately completely useless for endgame play.

We were doing something which involved having to fight our way through camps of trolls. These were clustered fairly tightly together, so we were carefully separating them and killing them one at a time. It was slow. Then something unfortunate happened, and we got a group of them at once. Obviously we were going to die. It was frantic; I was throwing damage-over-time spells at everything, and my spouse was furiously healing me and my pet. And then… everything died. We didn’t even get close to losing the fight. This was a surprise.

Naturally, we did it again. It turned out that a Warlock with a tank-pet and a Priest on tap for healing was basically indestructible against entire camps of enemies, once you knew what you were doing.

There’s nothing so special about that discovery itself, but I still fondly remember that moment when everything came together and we blew past what we thought our limits were.

It’s like that sometimes

I reserved that Warlock’s name again for the Classic launch. I rather doubt that I’ll stick with it; the nostalgia won’t live up to the unrelenting grind. But it’ll be interesting to dip my toes back in and see a different era once more.

WebFaction helpers: HTTPS and www

I like WebFaction, and have been using them for years now, but I’m the first to admit they’re a bit less… friendly… in some regards than many hosts.

I referenced a few of these unfriendly matters back when I mentioned switching to them, with an offhand "so I solved that". But I’ve decided to go into a little more detail now on one of these issues — common site redirections. Specifically, adding / removing www from your URL, and enforcing HTTPS on a domain.

Other simple hosts I’ve used have just had a checkbox for these features in your settings. For WebFaction, however, you need to write your own mini-application to handle it. When I say "mini", I mean it — all you need is the bare minimum of an app configured enough that it’ll interpret an .htaccess file.

I’ll assume from here that you’re familiar with the general WebFaction terminology, and the distinction between "application", "domain", and "website".

Remove www

Make a new application with type "Static" and subtype "Static/CGI/PHP-7.2".

Name the application redirect_www.

SSH in, and in the application directory create a file called .htaccess with the contents:

RewriteEngine on

RewriteCond %{HTTPS} =on
RewriteRule ^(.*)$ - [env=proto:https]
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ - [env=proto:http]

RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
RewriteRule ^(.*)$ %{ENV:proto}://%1%{REQUEST_URI} [R=301,QSA,NC,L]

This is more complicated than it strictly has to be, because it checks and remembers whether the site is on HTTP/S without you needing to explicitly configure it or make multiple versions of the application. I wanted something generic, because I have a bunch of different websites hosted.

An "add www" application is a fairly simple modification to this.

Assign it to a new website, with the domain www.whatever-your-domain-is.com.
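The "add www" variant isn’t something I run myself, but following the same pattern it might look like this, redirecting bare-domain requests to www:

```apache
RewriteEngine on

RewriteCond %{HTTPS} =on
RewriteRule ^(.*)$ - [env=proto:https]
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ - [env=proto:http]

# Redirect any host that does *not* already start with www.
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ %{ENV:proto}://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,QSA,NC,L]
```

You’d assign this one to a website on the bare domain instead.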

Enforce HTTPS

Make a new application with type "Static" and subtype "Static/CGI/PHP-7.2".

Name the application redirect_https.

SSH in, and in the application directory create a file called .htaccess with the contents:

RewriteEngine On

RewriteRule ^\.well-known/ - [NC,L]

RewriteCond %{HTTPS} !on [NC]
RewriteCond %{HTTP:X-Forwarded-SSL} !on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This will redirect so long as HTTPS isn’t already enabled, ignoring the .well-known subdirectory, which must be left unredirected for things like Let’s Encrypt certificate renewal.

Assign this application to a non-secure website with the same domain as a secure website you’re hosting.

Why didn’t I respond to your pull request?

I have some fairly popular open source packages up on GitHub. Happily, I get people submitting pull requests, adding features or fixing bugs. It’s great when this happens, because people are doing work that I don’t want to do / haven’t gotten to yet / didn’t think of.

…but I’m pretty bad at responding to these. They tend to languish for a while before I get to them. There’s a decent number which I’ve never even replied to.

Why is this?

Fundamentally, it’s because reviewing a pull request is potentially a lot of work… and the amount of work isn’t necessarily obvious up-front. This means I only tend to do reviews for anything which isn’t obviously trivial when I’m feeling energetic and like I have a decent amount of free time.

First, there are some common potential problems which might turn up:

  1. It does something I don’t want to include in the project. This is the only outright deal-breaker. Project owner’s prerogative.

  2. It doesn’t work. This happens more often than you’d think, generally because the submitter has written code for the exact use-case they had, and hasn’t considered what will happen if someone tries to use it in a different way.

  3. It works, but not in the way I want it to. For instance, it might behave inconsistently with existing features, and I’d want it adjusted to match.

  4. It should be written differently. This tends to include feedback like “you should use this module” / “this code should really go over here” / “this duplicates code”.

  5. It has coding style violations. Things like indentation, variable names, or trailing whitespace. These aren’t functional problems, but I still don’t want to merge them, because I’d just have to make another commit to fix them myself.

Once I’ve read the patch and given this feedback (which might itself take a while, since design feedback and proper testing that exercises all code paths isn’t necessarily quick), I’ll respond asking for changes. Then there’s an unknown wait period while the submitter finds time to respond. Best-case for me, they agree with everything I said, make all requested changes perfectly, and update their pull request! Alas, people don’t always think I’m a font of genius, so there’s an unknowable amount of back-and-forth needed to find a compromise we both agree on. This generally involves enough time between responses that the specifics of the patch aren’t in my head any more, so I have to repeat the review process each time.

What can I do better?

One obvious fix: delegate more. Accept more people onto projects and give them commit access, so I don’t have to be the bottleneck. I’m bad at doing this, because my projects tend to start as “scratch my itch” tasks, and I worry about them drifting away from code I’m personally happy with. Plus, I feel that if the problem is “I don’t review patches promptly”, “make someone else do it instead” is perhaps disingenuous as a response. 😀

So, low-hanging fruit…

Coding style violations, despite being trivial, are probably the most common reason a patch sits unmerged while I wait for someone to respond to a request to fix them. This is kind of my fault: I have a bad habit of not documenting the coding style I expect in my projects, relying on people writing consistent code by osmosis. Demonstrably, this doesn’t work.

As such, I’m starting to add continuous integration solutions like Travis to my projects. Without any particular work on my part, this lets me automatically warn contributors about coding style concerns which can be linted for, via tools like flake8 or editorconfig. If their editing environment is set up for it, they’ll get feedback as they write their patch… and if not, they’ll be told on GitHub when a pull request fails the tests, and don’t have to wait for me to get back to them about it.
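As a sketch, the Travis configuration for a Python project that lints every push can be as small as this (the Python version and flake8 settings are just examples):

```yaml
language: python
python: "3.6"

install:
  - pip install flake8

script:
  # Fails the build on any style violation, so contributors get
  # feedback immediately instead of waiting for a human review.
  - flake8 .
```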

The “it doesn’t work” issue can be worked into this framework as well, with a greater commitment to writing tests on my part. If my project is already well-covered, I can have the CI build check test coverage, and thus require that contributors are providing tests that cover at least most of what they’re submitting, and don’t break existing functionality.
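A hedged sketch of what that could look like with pytest and pytest-cov; the package name and the 90% threshold are placeholders, not values from any of my actual projects:

```yaml
language: python

install:
  - pip install pytest pytest-cov

script:
  # --cov-fail-under makes the build fail if total coverage drops
  # below the threshold, so untested contributions get flagged.
  - pytest --cov=mypackage --cov-fail-under=90
```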

This should reduce me to having to personally respond to a smaller set of “how should this be written?” issues, which I think will help.

Sublime Text packages: working in 2 and 3

I maintain the Git package for Sublime Text. It’s popular, which is kind of fun and also occasionally stressful. I recently did a major refactor of it, and want to share a few tips.

I needed to refactor it because, back when the Sublime Text 3 beta came out, I had made a branch of the git package to work with ST3, and was thus essentially maintaining two versions of the package, one for each major Sublime version. This was problematic, because all new features needed to be implemented twice, and wound up hurting my motivation to work on things.

Why did I feel the need to branch the package? Well…

The Problem

Sublime Text is currently suffering from a version problem. There’s the official version, Sublime Text 2, and the easily available beta version, Sublime Text 3. They’re both in widespread use. This division has ground on for around three years now, and is a pain to deal with.

It’s annoying, as a plugin developer, because of a few crucial differences:

Sublime Text 2:

  • Uses Python 2.7.
  • Puts all package contents into a shared namespace.

Sublime Text 3:

  • Uses Python 3.3.
  • Puts all package contents into a module named for the package.
  • Has some new APIs, removes some old APIs.

…yes, the Sublime Text 2 / 3 situation is an annoyingly close parallel to the general Python 2 / 3 situation that is itself a subset of the Sublime problem. I prefer less irony in my life.

Python

What changed in Python 3 is a pretty well-covered topic, which I’m not going to go into here.

Suffice it to say that the changes are good, but introduce some incompatibilities which need code to be carefully written if it wants to run on both versions.

Imports

If your plugin is of any size at all, you probably have multiple files because separation of code into manageable modules is good. Unfortunately, the differing way that packages are treated in ST2 vs ST3 makes referring to these files difficult.

In Sublime Text 2, all files in packages are in a great big “sublime” namespace. Any package can import modules from any other package, perhaps accidentally.

For instance, in ST2…

import comment

…gets us the Default.comment module, which provides the built-in “toggle comment on a line” functionality. Unless some other package has a comment.py, in which case what we’ll get becomes order-of-execution-dependent.

Note the fun side-effect of this: if any package has a file which shares a name with anything in the standard library, it’ll “shadow” that and any other package which then tries to use that part of the standard library will break.

Because of these drawbacks, Sublime Text 3 made the very sensible decision to make every package its own module. That is, to get that comment module, we need to do:

import Default.comment

This is better, and makes it harder to accidentally break other packages via your own naming conventions. However, it does cause compatibility problems in two situations:

  1. You want to access another package
  2. You want to use relative imports to access files in your own package

The latter case behaves differently depending on whether you’re inside a module or not:

# ST2:
from git import GitTextCommand, GitWindowCommand, git_root
from status import GitStatusCommand

# ST3:
from .git import GitTextCommand, GitWindowCommand, git_root
from .status import GitStatusCommand

Editing text

In Sublime Text 2 you had to call edit = view.begin_edit(...) and view.end_edit(edit) to group changes you were making to text, so that undo/redo would bundle them together properly.

In Sublime Text 3, these were removed, and any change to text needs to be a sublime_plugin.TextCommand which will handle the edit-grouping itself without involving you.

The Solution (sort of)

If you want to write a plugin that works on both versions, you have to write Python that runs on both 2 and 3, and tread very carefully around relative imports.

Python 2 / 3

A good first step here is to stick this at the top of all your Python files:

from __future__ import absolute_import, unicode_literals, print_function, division

This gets Python 2 and 3 mostly on the same page; you can largely just write for Python 3 and expect it to work in Python 2. There are still some differences to be aware of, mostly in areas where the standard library was renamed, or where the difference between bytes and str actually matters. But these can be worked around.

For standard library reshuffling, checking exceptions works:

try:
    # ST3
    from http.client import HTTPConnection
except ImportError:
    # ST2
    from httplib import HTTPConnection

If your package relies on something which changed more deeply, more extensive branching might be required.
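For example, the bytes/str split can often be smoothed over with a small helper; this is a generic sketch rather than code from the Git package:

```python
import sys

PY3 = sys.version_info[0] >= 3

if PY3:
    text_type = str
else:
    text_type = unicode  # noqa -- only defined on Python 2


def ensure_text(value, encoding="utf-8"):
    """Return value as text, decoding it first if it arrived as bytes."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value
```

Funneling external data (subprocess output, file contents) through a helper like this keeps the 2-vs-3 branching in one place.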

Imports

If you want to access another module, as above, this is a sensible enough place to just check for exceptions.

try:
    # ST3
    from Default import comment
except ImportError:
    # ST2
    import comment

You could check for the version of Sublime, of course, but the duck-typing approach here seems more Pythonic to me.

When accessing your own files, what made sense to me was to make it consistent by moving your files into a submodule, which means that the “importing a file in the same module” case is all you ever have to think about.

Thus: move everything into a subdirectory, and make sure there’s an __init__.py within it.

There’s one drawback here, which is that Sublime only notices commands that are in top-level package files. You can work around this with a my_package_commands.py file, or similar, which just imports your commands from the submodule:

try:
    # Python 3
    from .git.core import GitInitCommand, GitFooCommand
    from .git.add import GitAddCommand
except (ImportError, ValueError):
    # Python 2
    from git.core import GitInitCommand, GitFooCommand
    from git.add import GitAddCommand

There’s one last quirk to this, which only applies to you during package development: Sublime Text only reloads your plugin when you change a top-level file. Editing a file inside the submodule does nothing, and you have to restart Sublime to pick up the changes.

I noticed that Package Control has some code to get around this, so I copied its approach in my top-level command-importing file, making it so that saving that file will trigger a reload of all the submodule contents. It has one minor irritation, in that you have to manually list files in the right order to satisfy their dependencies. Although one could totally work around this, I agree with the Package Control author that it’s a lot simpler to just list the order and not lose oneself in metaprogramming.

Editing text

Fortunately, sublime_plugin.TextCommand exists in Sublime Text 2, with the same API signature as in Sublime Text 3, so all you have to do here is wrap all text-edits into a TextCommand that you execute when needed.

Conclusion

Getting a package working in Sublime Text 2 and 3 simultaneously is entirely doable, though there are some nuisances involved, which is appropriate given that “run in Python 2 and 3 simultaneously” is a subset of the problem. That said, if you do what I suggest here, it should largely work without you having to worry about it.

Wikimedia

I mentioned that I hadn’t been updating this blog, and that wasn’t just a matter of there being nothing to talk about.

Back in July I got laid off by DeviantArt. Since that was their second layoffs round of 2015, I think it’s fair to say that they’re having some problems.

This was non-ideal for me. In retrospect, I should probably have started looking around for a new job after the first layoffs round, but I’ll count that as a learning experience.

Fortunately, I then spent a month on downtime and relaxing, because I’d been terrible at taking vacation time at DeviantArt and they thus had to pay out a lot of vacation hours to lay me off.

Now I’m part of the Visual Editor team at the Wikimedia Foundation. I help people edit Wikipedia, essentially.

Migrating from Jekyll to WordPress

Funnily enough, there aren’t all that many resources for people who’re moving from Jekyll to WordPress. I took some advice from a post by Fabrizio Regini, but had to modify it a bit, so here’s what I figured out…

My starting point was a Jekyll-based site stored on github. Comments were stored using Disqus.

As a first step, I installed WordPress on my hosting. This was, as they like to boast, very easy.

Next I had to get all my existing content into that WordPress install. I decided the easiest way to do this was to use the RSS import plugin that WordPress recommends. So I added an RSS export file to my Jekyll site and ran Jekyll to have it build a complete dump of all my posts which I could use.

Here I ran into a problem. I’d set up my new WordPress site on PHP 7… and the RSS importer wasn’t able to run because it was calling a removed function. It was just a magic-quotes-disabling function, so I tried editing the plugin to remove it. However, after doing this I found that running the importer on my completely-valid (I checked) RSS file resulted in every single post having the title and contents of the final post in the file. So, plugin debugging time!

While doing this I discovered that the RSS importer was written using regular expressions to parse the XML file. Although, yes, that’s about as compatible as possible, I decided it was better not to go down the rabbit hole of debugging it, and just rewrote the entire feed-parsing side to use PHP’s built-in-since-PHP-5 SimpleXML parser. This fixed my title/contents problem.

My version of the plugin is available on GitHub. I can’t say I’ve tested it on anything besides the specific RSS file I generated, but it should maintain the behavior of the previous plugin.

With all my posts imported, I went through and did a little maintenance:

  • The import gave me post slugs which were all auto-generated from the title, while some of mine in Jekyll had been customized a bit, so I updated those to keep existing URLs working.
  • All images in posts needed to be updated. I went through and fixed these up by uploading them through WordPress.
  • Some markup in posts needed to be fixed. Mostly involving <code> tags.

Next came importing comments from Disqus. I tried just installing the Disqus plugin and letting it sync, but it seems that relies on you having WordPress post IDs associated with your comments… which I naturally didn’t. So I went out and found a Disqus comment importer plugin… which, much like the RSS importer, was broken. It expects a version of the Disqus export file which was current around 5 years ago, when it was last updated.

Thus we have my version of the Disqus comment importer plugin. It tries to work out the ID of your posts by looking at the URL. This works pretty well, but I did have to edit a few of the URLs in the export file to make sure they matched my current permalink structure. If you’ve never changed your permalinks, you should be good without that step.

Migration: complete.

WordPress Again

I haven’t been updating this site very often. Upon reflection, I decided that this is in part because the Jekyll workflow that I switched to was… inconvenient.

It would be possible to hack around this. I could have written some sort of simple web-app which generated a new post, committed it to git, pushed it to github, built the site, and sync’d it onto my hosting. That’d keep the ridiculous performance / security benefits of a static site, while still letting me make quick updates from wherever I happen to be. It’d even be fairly easy, at least to get something basic working.

But. I don’t really want to do that. The point of using a system like Jekyll or (before it) WordPress is to offload that particular bit of work onto someone else, who can pay attention to all of those details for me.

So, here I am on WordPress again. Hopefully, after a bit more than four years, I won’t find myself getting hacked again. 😛

Why WordPress again? Well…

It’s really popular. This does count for something. Automattic likes to point out that around a quarter of the public web runs on it. This means there are a lot of resources available.

To keep some of what I liked from Jekyll, I’m using Automattic’s Jetpack plugin. This gets me a lot of the fancy features from WordPress.com, including letting me keep using Markdown to write these posts. I’m also using the WP-Super-Cache plugin, because it seems that even now running uncached WordPress is just asking for trouble.

I’ll write another post soon about how to migrate from Jekyll to WordPress. There were a few bumps along the way.

Hubot

My employer has long used Skype as a team communication tool. This has some drawbacks, as I found myself complaining about way back in 2011, mostly that Skype is very much not optimized for big long-running rooms, particularly on mobile devices.

Given this, why have we stuck with it?

  • If we switch, everyone in the company needs to change their workflow to use some new tool, and most people don’t want to do this very much.
  • As such, we want to be really sure that whatever we switch to is sufficiently better than Skype that we won’t have to switch again anytime soon, because it’ll be an even harder sell to do so.
  • …but Skype really is fantastic at the “call a bunch of people and chat, without having to care about network settings” use-case. So we either need something else that’s fantastic at it, or something which’ll make it easy to keep using Skype to call a group of people you’re chatting with.
  • We have existing tools set up around Skype. We wrote a bot that announces stuff we care about in our chats, which we’re all very used to having around.

That last point gives us some incentive to make a switch now, as Skype decided to discontinue important parts of its API back in late 2013. This means that our existing integration is slowly falling apart, as the old version of Skype it has to work with becomes unable to interact with newer clients. It recently reached the point where it cannot send messages in rooms created by newer clients, which makes it effectively useless for new projects.

So. We’re kind of looking at Slack, and part of working out if we like it is getting our bot in there, so we can see how it feels with our normal workflow. However, our bot is just a thin wrapper around Skype4Py, and porting it to use the Slack API would effectively mean rewriting it in full… which seems like potentially wasted effort.

Enter the Hubot

Hubot is a chat bot framework, with adaptors for approximately everything. It’s fairly popular amongst the hip tech-company crowd, which our company is entirely too long in the tooth to consider itself a part of.

So I decided to port our custom stuff to hubot scripts. This turned out to be pretty easy, so long as I kept the CoffeeScript reference open in a browser tab.

I’ve written:

  • A subscribe-to-deviantart-events script, which lets users/rooms sign up to be notified of the events our existing skype-bot was already announcing.
  • A Zendesk script, which can be set up as a “target” on Zendesk so we can feed tickets into the aforementioned notifier. As a bonus, I gave our helpdesk a new system they’d been requesting for ages, which announces if we’ve received more than X tickets over the last Y hours, as a warning that there might be a serious issue.
  • A Phabricator integration to expand references to Phabricator objects (tickets, code-reviews, commits, etc.). This one I’ve actually put up on GitHub for general use, since I think it has nothing DA-specific in it.

Slack seems nice, so hopefully we’ll settle on it. But if we don’t, at least I’ve invested my time in something transferable.