Two simple techniques to make your release process more asynchronous and decentralized

How following simple conventions can remove the need for unscheduled synchronous meetings to release new versions of your apps.

The Automattic Apps Division has team members distributed across 22 timezones. When it comes to shipping new versions of our apps, that spread makes the traditional synchronous and meeting-driven approach impossible.

Being a distributed team forced us to put processes in place that allow work to happen in an asynchronous and decentralized way without the need for everyone to be online simultaneously and with no ad hoc decision-making.

In this post, I want to share two techniques we use to enable release managers to start the release process, a phase we call code freeze, for a new version on a regular schedule without synchronous input from the feature teams: milestones and labels.

Note: we host the code for our open-source apps on GitHub, so the implementation details will be specific to that platform. GitLab and Bitbucket (via Jira) have similar tools.


How can a release manager know if all the pull requests scheduled for the release they’re about to code freeze have been merged?

PR authors add the milestone for the version in which their work is supposed to ship, and GitHub lets us filter pull requests by milestone. This way, the release manager can immediately know whether all the necessary work has already landed in the main branch. No need to ask people, “Is everything ready to go?”

We use Peril, a hosted version of Danger, to remind authors to add a milestone to their PR.

What happens if there are pull requests still open on the code freeze day? That’s when labels come into play.


Alongside a milestone, each PR should have at least one label specifying the kind of change it introduces.

As with milestones, we have a Peril check to remind authors to add labels.

The combination of a milestone with different labels allows the release manager to decide whether to delay the code freeze or reschedule open pull requests to the next version.

Let’s consider a simplified example with only two possible labels, “bug” and “enhancement,” and let’s say we’re about to code freeze version 1.2.3.

| milestone \ label | bug | enhancement |
| --- | --- | --- |
| 1.2.3 | wait before code freeze | no need to wait |
| next version | no need to wait | no need to wait |

Only when an open PR has the current version milestone and the “bug” label will the release manager ask the author for extra input. If the fix is on its way, they’ll wait for it. Otherwise, they’ll go ahead without it and ship a new beta once it’s ready.
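The decision table above boils down to a single predicate. Here is a sketch with hypothetical types (`Label` and the function name are illustrations, not code from our repositories):

```swift
/// The only case that delays a code freeze: an open PR that targets
/// the current milestone and is labeled as a bug fix.
enum Label {
    case bug
    case enhancement
}

func shouldWaitBeforeFreeze(prMilestone: String,
                            currentMilestone: String,
                            labels: Set<Label>) -> Bool {
    return prMilestone == currentMilestone && labels.contains(.bug)
}
```

Every other combination falls into “no need to wait”: the PR either targets a later version or only adds an enhancement that can ship next time.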

Sometimes, the author of a PR that’s getting close to the code freeze date will mention the release manager in a comment with a rough ETA to give them a heads up and enable them to decide whether to wait or not. We also have a cron job that, a couple of days before a code freeze, will look for open PRs for the scheduled milestone and post a Slack message to nudge the team to review and merge them before the deadline.

These workflows are always evolving, adapting to our changing needs, and becoming more and more sophisticated. Recently, we introduced a new Peril check to post a warning comment when a developer opens a PR with a milestone that’s within four days of its code freeze date.

Wouldn’t it be easier to cut the release at the predefined time regardless of open PRs, and ship the leftovers in subsequent betas? Shipping a new beta comes with an unavoidable overhead. We have automated the deployment work to the point that we only need to run a single Fastlane command, but our Excellence Wranglers still need to go through the app from the start, and every new build is a new download that our beta testers have to make. If an open PR will be ready in a few hours, it’s worth waiting for it.

We developed our milestones and labels process because our distributed setup demanded an asynchronous way to wrangle decision making when preparing an app release, but you don’t have to be a distributed team to put these kinds of systems in place.

Every team, distributed or co-located, can benefit from the introduction of asynchronous processes. Decentralizing decision-making with clear guidelines and shared context removes bottlenecks and enables individual team members to work effectively regardless of whether their peers are online.

What are your favorite tactics and strategies to streamline internal processes, such as publishing a new version of the app? Leave a comment below. I’d love to hear from you.

How We Use Feature Flagging on iOS

Great features take time to build. We release a new version of the WordPress for iOS app every two weeks. But sometimes a feature takes more than two weeks to develop. In those cases, we use feature flagging to gate in-progress features so we can continue to build and test them without exposing them to regular users before they’re ready.

With a feature flag in place, you can present different user interface elements or different menu options to a user based on which build of the app they’re running — whether that’s a local debug version, an internal testing version, or an App Store version. And it’s not just limited to UI. You can also swap out entire sections of backend logic using a feature flag.

In this post, we’ll look at how we implement feature flags in WordPress for iOS.


Build Configurations

First, we decide which factor we’ll use to determine whether a given feature flag is enabled or disabled. In our case, it’s based on the current build configuration. We have four configurations in our Xcode project: debug, release, internal test builds, and alpha test builds.

So we can toggle our feature flags, we need to be able to determine the current build configuration in code. We start by adding a BuildConfiguration enum, with an option for each type of build:

enum BuildConfiguration {
    /// Development debug build, usually run from Xcode (debug)
    case localDeveloper

    /// Continuous integration builds,
    /// sometimes used to test branches & pull requests (alpha)
    case branchTest

    /// Internally released betas (internal)
    case prereleaseTesting

    /// Production build released in the App Store (release)
    case appStore
}

Next, within the BuildConfiguration enum we add a single computed property, current, which returns the current build configuration:

static var current: BuildConfiguration {
    #if DEBUG
        return .localDeveloper
    #elseif ALPHA_BUILD
        return .branchTest
    #elseif INTERNAL_BUILD
        return .prereleaseTesting
    #else
        return .appStore
    #endif
}

DEBUG, ALPHA_BUILD, and INTERNAL_BUILD are Swift flags, defined in our target’s build settings:

These allow us to differentiate between the different build configurations at runtime, by inspecting which of the Swift flags are defined. In the next step, we’ll be able to enable feature flags based on the current build configuration.

Defining Feature Flags

Our feature flags themselves are defined in another enum, imaginatively named FeatureFlag:

/// FeatureFlag exposes a series of features to be 
/// conditionally enabled on different builds.
enum FeatureFlag: Int {
    case exampleFeature
    case revisions
    case enhancedSiteCreation
    case quickStart

    /// Returns a boolean indicating if the feature is enabled
    var enabled: Bool {
        switch self {
        case .exampleFeature:
            return true
        case .revisions:
            return BuildConfiguration.current == .localDeveloper
        case .enhancedSiteCreation:
            return BuildConfiguration.current ~= [.localDeveloper, .prereleaseTesting]
        case .quickStart:
            return BuildConfiguration.current != .appStore
        }
    }
}

The enabled computed property contains the logic that determines whether a given feature flag should be enabled. We do this by comparing the current build configuration to the configuration(s) for which we’d like to enable the feature flag. In the example above, we have four feature flags:

  • exampleFeature returns true: it is always enabled.
  • revisions is only enabled for the localDeveloper build configuration.
  • enhancedSiteCreation is enabled for both localDeveloper and prereleaseTesting builds.
  • quickStart is enabled for all builds except for appStore.

You may notice that the enhancedSiteCreation case above uses a custom operator, ~=. This is defined as a static function within BuildConfiguration, which allows us to compare the current build configuration against an array of configurations:

    static func ~=(a: BuildConfiguration, b: Set<BuildConfiguration>) -> Bool {
        return b.contains(a)
    }

This is useful if we want a feature to be available in multiple configurations.

The final step is to use our feature flags to enable or disable features or UI elements.

Using Feature Flags

Now that they’ve been defined, to use a feature flag we need to check that it’s currently enabled. For example, here we’ll expose an alert controller action for our Revisions feature only if the revisions feature flag is enabled:

if FeatureFlag.revisions.enabled {
    // Only expose the Revisions action when the flag is on
    alertController.addAction(revisionsAction)
}

Based on the FeatureFlag implementation above, the revisions action will only be added to our alert controller for local debug builds of the app.

In WordPress for iOS, we’ll often add a new feature flag to the app when we first start developing a feature. We can build the feature behind the flag, and then change the flag’s enabled logic to release it to testers or the App Store when we feel it’s ready.

For some bigger features, we may first expose it to our internal test builds for a round of testing, and then only open it up to the App Store configuration when we’re happy we’ve addressed any issues. Even after a feature has been released, it can also be useful to leave the flag in place for a release or two, in case you ever need to roll it back.

The WordPress for iOS app is fully open source, so if you’re interested you can check out our complete implementation of BuildConfiguration.swift and FeatureFlag.swift on GitHub.

Screenshot Script

When I make pull requests for user interface changes, I include screenshots so everyone else involved in the product can see what I see.  It’s especially helpful for designers to quickly compare their mockups with the final result.  Folks have asked me if I use a particular program to create them — I don’t! It’s a fairly manual process, which I’ve now automated to make life easier. Read on to learn how to create screenshot images with a single terminal command.

The Old Way

Prior to automating the process, I’d been copying and pasting screenshots onto template images to make the composite images.  Over the years, I’ve developed a method that makes it relatively quick, stockpiling template images based on the desired screenshot image layout (e.g., 2×1, 2×2, 2×6, etc.).  After creating a template layout, I copied, pasted, and positioned the screenshots onto the template. Typical templates (e.g., 2×1 or 2×2) don’t take very long; larger templates like 4×5 can take a little while since they require copying, pasting, and positioning 20 screenshots.

The New Way

I decided to write a Bash script and attempt to do everything I do manually with one command.  Now, what usually took minutes to do takes seconds. And it’s user-friendly enough to share, in case anyone else would like to give it a (screen)shot.

The script uses multiple options to perform a series of concatenations and create a single ImageMagick command.  All options are required except -f.  The breakdown of each option and its purpose follows:

| Option | Purpose |
| --- | --- |
| -f | font of header text, Roboto.ttf by default, optional |
| -l | layout of output image, integers in columns:rows format |
| -o | output image, file name with optional path without spaces |
| -s | screenshot image, file name with optional path without spaces |
| -t | text of column header, string without spaces |

Both the -s and -t options are meant to be used multiple times when running the script (i.e., one -s for each screenshot and one -t for each text label).  Basic error handling ensures that the number of screenshots and text labels input match the expected layout.

The order of the screenshot arguments determines how they will be inserted into the output image.  The first screenshot image will be in the first column and first row. The second screenshot image will be in the second column and first row if there’s only one row, or the first column and second row if there are multiple rows.  In other words, the screenshot images fill the first column before moving on to the next one, from left to right. The text labels do the same, but there’s only one row of headers, so the text labels are only inserted into the first row, from left to right.  The script detects the dimensions of the first screenshot image and creates the output image based on that. Therefore, it should work for any device size (i.e., phone or tablet) and any orientation (i.e., portrait or landscape).  The only caveat is that all the screenshots must have the same dimensions.
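That column-first fill order reduces to a small mapping from argument index to grid cell. The following sketch illustrates the logic (shown in Swift for brevity; the script itself is Bash):

```swift
/// Map the 0-based position of a `-s` argument to the cell it fills.
/// Screenshots fill the first column top to bottom, then move to the
/// next column, left to right.
func cell(forScreenshot index: Int, rows: Int) -> (column: Int, row: Int) {
    return (column: index / rows, row: index % rows)
}
```

With a 2:3 layout (rows = 3), for example, the fourth screenshot argument (index 3) lands at the top of the second column.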

The compiled command run at the end of the script uses the magick command, which means ImageMagick needs to be installed on the system (I used ImageMagick 7.0.9-2 Q16 x86_64 2019-10-31).  You’ll also need to specify a font to use for the header text labels.  I use Roboto; you can use whatever you like as long as the OTF or TTF file is in the same directory as the script and the FONT constant is updated.  You can also specify any font shown by the magick identify -list font command with the -f option if you prefer to use a built-in font.

In Action

Here are a couple of example uses with explanations:

./ -l 2:1 -f TimesNewRoman -o output.png -t Text1 -t Text2 -s screenshot_01.png -s screenshot_02.png

This command will create the output.png file, which will have two columns and one row with the first column containing the “Text1” header and the screenshot_01.png image, and the second column containing the “Text2” header and the screenshot_02.png image.  The headers will use Times New Roman font.  All files are in the same directory as the script.  The output.png image looks like this:

./ -l 2:3 -o /Users/Tyler/Documents/842_add_sorting_results_search.png -t Develop -t Layout -s ~/Downloads/Screenshot_1573434041.png -s ~/Downloads/Screenshot_1573434048.png -s ~/Downloads/Screenshot_1573434052.png -s ~/Downloads/Screenshot_1573434056.png -s ~/Downloads/Screenshot_1573434061.png -s ~/Downloads/Screenshot_1573434071.png

The command above creates the 842_add_sorting_results_search.png file, which will have two columns and three rows, with the first column containing the “Develop” header and the Screenshot_1573434041.png/Screenshot_1573434048.png/Screenshot_1573434052.png images and the second column containing the “Layout” header and the Screenshot_1573434056.png/Screenshot_1573434061.png/Screenshot_1573434071.png images.  The headers will use the default Roboto font.  The screenshot files are in the ~/Downloads directory and the 842_add_sorting_results_search.png file will be saved in the /Users/Tyler/Documents directory.  The 842_add_sorting_results_search.png image is shown below.


In Summary

Is this the best possible script?  Probably not; it doesn’t use the latest and greatest programming language and could be optimized.  But does it serve its purpose, to automate a part of my everyday life, allow me to work more efficiently, and share a helpful tool with my colleagues?  Yes! It accomplishes those objectives and that’s good enough for me.

(Oh, and it’s licensed under GPLv2 of course.  So you can take it and do whatever you want with it.  But if you change it, please share!)

Improving Offline Posting

The best technology is invisible and reliable. You almost forget it’s there, because things just work. Bad technology never disappears into the background — it’s always visible, and worse, it gets in your way. We rarely stop to think “My, what good Wi-Fi!” But we sure notice when the Wi-Fi is iffy.

Good technology in an app requires solid offline support. A WordPress app should give you a seamless, reliable posting experience, and you shouldn’t have to worry about whether you’re online or offline while using WordPress Mobile. And if we’ve done our jobs right, you won’t anymore.

Getting Started

Our first step was a review of the current offline posting experience, done by a cross-functional team of designers, QA engineers, and developers.

The process was pretty straightforward: we went offline, started testing, and took notes. We also took into account support requests and existing bug reports.

We shared our findings, discussed them and came up with a set of offline principles to ensure the consistency needed to make things better for our users.

Consistent Messaging

One of the first issues we noticed was that we were constantly interrupting the user’s workflow with blocking alerts. We also noticed the messaging was inconsistent at best.

The blocking alerts were particularly harmful because they got in the way of the user’s activity without adding much value. They directly clashed with the idea that the best technology should not get in your way.

We removed alerts that weren’t offering any useful choices, replaced the ones that were useful with better contextual messages that display inline within the UI, without blocking the user’s workflow, and standardized all our messaging to be more consistent and clear.

Showing non-blocking contextual information to the user.

Automatic Uploading

More significantly, the offline posting flow was broken. Users were unable to save modifications to posts while offline.

Additionally, some of the blocking alerts were directly tied to the app’s inability to save or publish while offline. They existed because we needed to prevent the users from performing actions that weren’t supported while offline.

To resolve this, we implemented logic to queue upload operations, and replaced the blocking alerts with contextual messaging. Now, the upload queue executes when specific triggers are activated, as shown in the image below.

The triggers that activate the posts uploader.

Playing it Safe

But queueing upload operations can be problematic.

Consider a case where the user makes more changes to a post after publishing it. We needed to make sure that we wouldn’t publish changes that happened after the user tapped “Publish”.

We could have stored a local immutable copy of the post each time it was saved, but the complexity of this solution and its unclear added value made us look for alternatives.

The solution ended up being relatively simple: we generated a hash code based on all of the post fields, and associated that hash with the queued publish operation. When executing the publish operation, the hash has to match the post for it to be published.

But even if the hash doesn’t match and the publish operation is cancelled, the app will auto-save the post so it’s available from other devices.

Additionally, we didn’t want devices that were offline for an extended period to remember offline save operations forever. The solution here was to make sure queued upload operations time-out after a while. This safety mechanism helps us ensure a device that’s offline for a week doesn’t trigger any uploads when going back online.
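The two safety checks can be combined into one simplified sketch. The names and the time-out value here are illustrative, not our production code:

```swift
import Foundation

/// A queued publish remembers a hash of the post's fields taken when
/// the user tapped "Publish", plus the time it was enqueued.
struct QueuedPublish {
    let postHash: Int
    let queuedAt: Date
}

/// Hash the fields that define the post's content; any later edit
/// produces a different hash.
func contentHash(title: String, body: String) -> Int {
    var hasher = Hasher()
    hasher.combine(title)
    hasher.combine(body)
    return hasher.finalize()
}

/// Publish only if the post is unchanged since enqueueing and the
/// operation hasn't outlived the time-out window (value illustrative).
func shouldExecute(_ op: QueuedPublish,
                   currentHash: Int,
                   now: Date = Date(),
                   timeout: TimeInterval = 48 * 60 * 60) -> Bool {
    let isFresh = now.timeIntervalSince(op.queuedAt) < timeout
    return isFresh && op.postHash == currentHash
}
```

When the hashes differ and the publish is cancelled, the real app still auto-saves the post rather than discarding the changes.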

Final thoughts

This process forced us to put ourselves in our users’ shoes. By making our users’ struggles our own, we came away with a better understanding of what makes for a good offline posting experience.

But that’s not enough!

It’s easy to forget about the offline experience as we’re working on new features. Our most important takeaway is the importance of remaining aware of what happens when the device goes offline and considering the implications for our users.

Because once users stop noticing they’re offline, we’ll know we did a great job.

How to test push notifications in your iOS production app

Testing push notifications can be tricky, especially when an app has been released in the App Store. Luckily,  there are lots of tools available to make this easier. Let’s look at how to set up and use two of them: Houston and Pusher.

A sample push notification

Push notification content is defined by a JSON payload. Here’s a sample payload:

{
    "aps" : {
        "alert" : "Testing push notifications in production",
        "badge" : 1,
        "sound" : "o.caf"
    },
    "foo" : "bar"
}

Depending on your feature requirements, your push notification payload may look more complex than this one.

Setting up your environment

Before you can test push notifications, there are a couple of things you’ll need: your device’s push notification token and your app’s push notification certificate.

  1. Get your device’s push notification token. Depending on how your Services layer is set up, this means either a lookup on your API endpoint, or a ping to your DB admin to ask them to find it.
  2. Acquire the APNS production certificate and private key. This will usually be a .p12 file that’s password protected and shared securely among your team members. (This is also a good time to check with your System Admin and make sure the .pem file they were provided had the private key attached. Apple will consider the .pem file invalid without it.) The APNS Sandbox certificate is reserved for working with push notifications in your development environment.
  3. Open the production certificate in Keychain and verify the private key is attached. Double-click the .p12 file and it should open in Keychain automatically.
  4. Use the command line to convert your .p12 into a .pem file: $ openssl pkcs12 -in cert.p12 -out prod-cert.pem -nodes -clcerts
    This command assumes you have openssl already installed.

Testing push notifications using Houston

If you’re comfortable working in the command line, Houston is a powerful and sleek way to test your push notification system with a single command.

Houston is a simple [Ruby] gem for sending Apple Push Notifications. Pass your credentials, construct your message, and send it.

  1. Install Houston. gem install houston
  2. Copy and paste the code found under the Usage section in Houston’s readme into a text file. Save as… a ruby file, e.g.: houston.rb
  3. Add configurations to your ruby file. Add the correct device token and path to your .pem file, and customize the message at a minimum. For added fun, reference your custom notification sound (if you have one). Then save the file. See below for an example.
  4. Go to Terminal and run ruby /path/to/houston.rb

Here’s an example of houston.rb configured for a WooCommerce new order push notification:

require 'houston'

# Environment variables are automatically read, or can be overridden by any specified options. You can also
# conveniently use `Houston::Client.development` or `Houston::Client.production`.
APN = Houston::Client.production
APN.certificate = File.read('prod-cert.pem')

# An example of the token sent back when a device registers for notifications
token = '<ce8be627 2e43e855 16033e24 b4c28922 0eeda487 9c477160 b2545e95 b68b5969>'

# Create a notification that alerts a message to the user, plays a sound, and sets the badge on the app
notification = token)
notification.alert = 'Testing push notifications in production using Houston'

# Notifications can also change the badge count, have a custom sound, have a category identifier, indicate available Newsstand content, or pass along arbitrary data.
notification.badge = 1
notification.sound = 'o.caf'
#notification.category = 'INVITE_CATEGORY'
#notification.content_available = true
#notification.mutable_content = true
#notification.custom_data = { foo: 'bar' }
#notification.url_args = %w[boarding A998]
#notification.thread_id = 'notify-team-ios'

# And... sent! That's all it takes.
APN.push(notification)

And if everything’s set up correctly, you should see…

The end result!

Testing push notifications using Pusher

If you prefer GUIs, Pusher is a small Mac utility with an all-in-one interface for organizing the many details required to send an Apple Push Notification. Yes, it has an unusual icon.

  1. Install Pusher. brew cask install pusher (if you don’t already have Homebrew, go here to install it). Alternatively, you can download the latest binary straight from GitHub.
  2. Go to Applications, right-click the app, and choose “Open Anyway” to open Pusher.¹
  3. Configure Pusher. The Pusher interface is fairly straightforward. Select the drop-down menu and find your production APNS certificate. (You may be prompted for your admin username and password, so that Pusher can access the certificate from Keychain.) Note that Pusher can handle storing multiple device tokens in a nice drop-down menu.
  4. Add the push notification payload to the “Payload” text field.
  5. Select the “Push” button when you are ready to send.

And if everything’s set up correctly, you should see…

Testing push notifications after setting up

In Houston, you can re-send the same push notification any time by opening Terminal and using the command  ruby /path/to/houston.rb. To make edits to your push notification, open houston.rb in your favorite text editor. Save and run the command ruby /path/to/houston.rb to send it again.

In Pusher, you can re-send the same push notification any time by opening the Pusher app and selecting the “Push” button. To make edits to your push notification, edit the payload JSON inside of the “Payload” text field. To send the new push notification, select the “Push” button.

In Summary

Houston can send notifications with a simple command and doesn’t require in-depth knowledge for constructing complex payloads. Pusher finds your certificates for you and doesn’t require you to know how to define file paths, but does expect you to build your own payloads. Now that you have them set up, you can also use them for future work in your dev environment as well.

If you’d like to check out the apps, they’re available on the Google Play Store and the iOS App Store, and you can find all the code on GitHub.

1. If you attempt to simply launch the app, you will likely encounter this warning:

This is happening because you downloaded the app through the terminal instead of the App Store. Either right-click the application and choose “Open Anyway” or go to the Apple menu > System Preferences > Security & Privacy and select the “Open Anyway” button.

Offline Principles

Co-written with Megs Fulton

Many of us design and build apps in air-conditioned offices in major cities, using the latest devices with perfect internet connections. We don’t often think about how apps should work without a strong internet connection.

It’s no wonder so many apps feel clunky or broken with a flaky internet connection, or none at all.

We decided to question ourselves, and to start treating the offline state as a core part of the experience rather than an edge case. While not everything can happen offline, plenty of tasks like writing a blog post should still work regardless of your connection.

Guiding principles

Our principles describe how our apps should behave with a poor connection. They outline what we aspire to achieve, and serve as guidelines for building future features with offline in mind.

1. Nondisruptive, contextual communication

If we detect that a data connection is unavailable, we alert the user when it’s pertinent to their recent actions. We communicate the message so as not to disrupt their workflow.

Example 1: When there is a connection loss we avoid showing a blocking alert. We allow the user to continue writing. But we give them information in context and let them know what to expect.

Example 2: We were showing blocking alerts when we tried to load unavailable data. We’re trying to cache where possible. But when we can’t do so, we fall back to an inline error rather than a blocking alert.

2. Continuous and consistent

The app experience should not differ significantly on a bad network connection. The offline experience is managed and messaged consistently. The behaviours are also similar across devices and platforms so users learn what to expect.

Example 1: Rather than requiring an internet connection to load the user’s posts, we cache them on the device so that the user can still be productive while offline.

Example 2: We don’t want users to have to learn two ways to use our app. Rather than providing different actions while offline, we show users what is possible and let them attempt those actions.

3. Inspire confidence with reliable outcomes

The user feels confident using the app for important tasks with an unstable connection. When an action can’t be completed due to the connection state, we communicate it clearly.

Example: The user is trying to upload an image. We ensure the data isn’t lost and communicate the outcome of the action.

We believe the offline experience is important. These principles will guide us as we improve our apps, and we hope you find them useful too.

WordPress API Challenges

WordPress powers over 30% of all websites, but not all websites using WordPress are the same. Since WordPress is open-source software that anyone can download and install on their own server, there are countless versions of WordPress running unique configurations.

This creates a unique challenge for the WordPress mobile apps: Instead of having a single server with a defined API that we control and manage, we need to support multiple APIs from different WordPress versions, and we can never completely trust the data being given back to us.

A short history of WordPress APIs

Why don’t we support a single API in our apps? Wouldn’t that make our lives easier? We could have decided to support only, but that would exclude the broader WordPress community that hosts their sites elsewhere. Another solution was to use only the XMLRPC API for sites, but that would limit the functionality of the app for the users.

Until recently, WordPress only supported one main API: the XMLRPC API. Yes, you read that right: XML Remote Procedure Calls. Even on, this was the only option available back in the day.

A few years ago on, we implemented Calypso, a brand new user interface that needed a new API to back it. This was the origin of the REST API, which uses OAuth authentication and provides JSON responses.

Automattic also develops Jetpack, a plugin that connects instances of self-hosted WordPress to our infrastructure. Jetpack allows self-hosted sites to benefit from features that require server-side infrastructure like push notifications, a media CDN (Content Delivery Network), stats, and centralized management of multiple sites. Using Jetpack as a bridge, we can access those sites using the REST API, but with some limitations.

Finally, with the release of WordPress 4.7, a new REST API was integrated into the core of WordPress. While this was good news, there is still a missing piece to make it a universal API: a central authentication scheme.

On we decided to support this new API. But because of our unique server configuration, we needed to change the base structure of the endpoints. We called this API the REST API v2.

How do we support so many APIs? Layers

While the REST API v2 is the future, there’s no way to migrate every service and site to this new API in one go – we still need to live with the other APIs for a while. The app needs to support:

  • XML-RPC
  • REST API v1
  • Jetpack
  • REST API v2

How did we solve this in the WordPress for iOS app? Layers! Lots of layers of service classes. Here’s a diagram for the iOS app:

At the bottom level, we have two API objects: WordPressComRestAPI and WordPressXMLRPCAPI. Each of those objects implements the authentication of requests, the creation of request data, the parsing of responses, and base error handling.

Above the API objects we have a Remote layer. It provides a standard protocol interface for each API, but internally it adapts the requests to the correct format and parses the answers depending on the specification of each.

On top of the stack, we have the Service objects that coordinate all the requests to Remote objects and handle the data serialization. Most of the Services are entirely unaware of what kind of remote they are using and rely on the standard interface.
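As a toy illustration of the layering (the protocol and type names here are hypothetical; the real ones live in WordPressKit):

```swift
/// The Remote layer: one protocol per domain object, with one
/// conforming implementation per API.
protocol PostsRemote {
    func fetchPostTitles() -> [String]
}

struct RestPostsRemote: PostsRemote {
    // Would build a JSON request against the REST API.
    func fetchPostTitles() -> [String] { ["REST post"] }
}

struct XMLRPCPostsRemote: PostsRemote {
    // Would build an XML-RPC call against a self-hosted site.
    func fetchPostTitles() -> [String] { ["XML-RPC post"] }
}

/// The Service layer coordinates requests through the protocol and
/// never needs to know which API is behind it.
struct PostService {
    let remote: PostsRemote
    func titles() -> [String] { remote.fetchPostTitles() }
}
```

Swapping the remote swaps the API without touching the service, which is what lets most Services stay unaware of the underlying protocol details.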

If you’re curious about the code of the Remote and Service objects, you can check out the MediaService class and the MediaServiceRemote class.
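The layering described above can be sketched roughly like this. All the names below are simplified, hypothetical stand-ins for the real WordPressKit types, which are asynchronous and far more complete; the point is only how a shared Remote protocol lets a Service stay unaware of which API backs the current site:

```swift
// Bottom layer: an API object handles authentication and request transport.
// Hardcoded responses here stand in for real network calls.
protocol APIClient {
    func get(_ path: String) -> [String: Any]
}

struct RESTClient: APIClient {
    func get(_ path: String) -> [String: Any] {
        // A real client would perform an authenticated HTTP request here.
        return ["ID": 1, "URL": "https://example.files.wordpress.com/photo.jpg"]
    }
}

struct XMLRPCClient: APIClient {
    func get(_ path: String) -> [String: Any] {
        // XML-RPC responses use different field names than the REST API.
        return ["attachment_id": 1, "link": "https://example.com/photo.jpg"]
    }
}

// Shared model produced by every Remote, whatever the source API.
struct Media: Equatable {
    let id: Int
    let url: String
}

// Middle layer: each Remote adapts its API's response format to the model.
protocol MediaRemote {
    func fetchMedia(id: Int) -> Media
}

struct MediaRESTRemote: MediaRemote {
    let api: APIClient
    func fetchMedia(id: Int) -> Media {
        let json = api.get("media/\(id)")
        return Media(id: json["ID"] as? Int ?? 0,
                     url: json["URL"] as? String ?? "")
    }
}

struct MediaXMLRPCRemote: MediaRemote {
    let api: APIClient
    func fetchMedia(id: Int) -> Media {
        let json = api.get("wp.getMediaItem")
        return Media(id: json["attachment_id"] as? Int ?? 0,
                     url: json["link"] as? String ?? "")
    }
}

// Top layer: the Service depends only on the MediaRemote interface, so the
// same coordination code works for REST and XML-RPC sites alike.
struct MediaService {
    let remote: MediaRemote
    func media(id: Int) -> Media {
        return remote.fetchMedia(id: id)
    }
}
```

Swapping `MediaRESTRemote` for `MediaXMLRPCRemote` changes which API is spoken without touching the Service layer at all.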

What about Jetpack?

At the moment, Jetpack sites use the same API as WordPress.com sites, with some responses handled differently at the remote level for the WordPressComRestAPI.

Version 2 of the REST API is only used when we implement new features that need new endpoints. The differences here are mainly handled in the WordPressComRestAPI object combined with the remote objects.
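Supporting a second API version in a single API object can be as simple as making the version another input to URL construction. This is a hedged sketch with hypothetical type and function names (only the WordPress.com base paths, rest/v1.1/ and wp/v2/, reflect the public endpoint conventions):

```swift
// Hypothetical illustration: the API version selects a base path, and one
// API object builds requests for either version.
enum APIVersion {
    case v1_1  // the original WordPress.com REST API
    case v2    // the core-WordPress-style REST API

    var basePath: String {
        switch self {
        case .v1_1: return "rest/v1.1/"
        case .v2:   return "wp/v2/"
        }
    }
}

func endpointURL(path: String, version: APIVersion) -> String {
    return "https://public-api.wordpress.com/" + version.basePath + path
}
```

With this shape, a remote that needs a v2-only feature just passes `.v2` while everything else keeps defaulting to `.v1_1`.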

In the future, we’ll move all our API calls to v2 of the REST API and unify all these APIs. This is a long-term plan that will require us to:

  • Audit the current use of V1 endpoints and check if the same functionality/data is available on V2 endpoints.
  • Implement the necessary V2 endpoint changes for any missing data.
  • Port the existing code that uses the V1 API to the equivalent V2 API endpoints.
  • Implement or help implement a secure authentication scheme for sites hosted outside of WordPress.com.
  • Remove the existing XMLRPC code and port it to use the new V2 endpoints.
  • Test, test, test.

As you can see, providing an app that can connect to 30% of the sites on the web has some complex challenges. While we try to isolate our users from this complexity as best we can, in practice it often has a significant and easily misunderstood effect on the UX. Some features, like the ability to edit media settings, can only be available on WordPress.com sites because the XMLRPC API doesn’t provide an endpoint for this action. This means limiting the actions for our self-hosted sites while still making the feature available for our other users. Getting the balance right can be tricky, which is why our team invests a lot of time in our server APIs.

All of the code for the server communication on iOS is open source and available in the WordPressKit repository. Feel free to use it in your projects if you need to talk to WordPress servers!

Want to help take the WordPress apps to the next level? We’re hiring!

WordPress Mobile Apps: the Heartbeat release process

A great new feature, an important bug fix, a UI improvement, a subtle but effective change: whenever the latest development is complete, the next step is to ship it to users.

As developers, we want to deliver improvements quickly, but we have to strike a balance. We like to release bug fixes as soon as possible, but we also want to avoid annoying users with too-frequent updates.

With the WordPress mobile apps (and now also WooCommerce), we’ve found our sweet spot with a two-week heartbeat. Every two weeks, we freeze the codebase, stabilize the build with help from our testing community, and then release it on the app stores.

The process

The release train passes every other week and picks up every bug fix and feature completed up to that day. To be ready for release, a new version of the app must fulfill a set of minimum requirements:

  • A stable build with no unverified warnings.
  • The build passes all the automated tests run by the continuous integration system.
  • New strings have been submitted to our community translation tool, so we can support as many languages as possible with each update.
  • Every new feature has been properly tested and the whole application has been tested against regressions.
  • A proper app store setup is available to showcase the relevant changes (release notes, screenshots, metadata).

Continuously meeting every item in this checklist is only possible with good development practices in place. We rely on strict version control, a GitFlow branching model, and a community of testers as we ship each iteration.

Strict version control

The apps’ source code is on GitHub and every piece of new code is reviewed by at least one developer. Additionally, a continuous integration (CI) system checks the style and the build on every pull request (PR).

GitFlow branching model

We use the GitFlow branching model to manage bug fixes and feature development. It has some useful characteristics:

  • Parallel development: GitFlow makes parallel development straightforward by isolating new features from finished work. New development happens in feature branches, and is only merged back into the main body of code when the developers are satisfied that the code is ready for release.
  • Collaboration: feature branches also make it easier for two or more developers to collaborate on the same feature, because each feature branch is a sandbox where the only changes are the ones necessary to get the new feature working. That makes it very easy to see and follow what each collaborator is doing.
  • Release staging area: as new changes are ready, they get merged back into the develop branch, which is a staging area for code that hasn’t been released. When the next release is branched off of develop, it automatically contains all of the new features that have been finished.
  • Support for emergency hotfixes: this model also provides support for emergency fixes via tags. Hotfix branches can be made from a tagged release and used to make an emergency change, safe in the knowledge that the hotfix will only contain the emergency fix; there’s no risk of unrelated new development being merged at the same time.

A community of Beta Testers

We have a community of beta testers who receive new versions in advance and send us feedback. This is a very important addition to our own internal testing.

Release schedule

A full release cycle can be divided into three stages:

Code Freeze

On day one of the release cycle, we make the cut: anything not in the development branch will not be included in this release.

Following GitFlow, a new release branch is created from the development branch. From now until the actual release date, this branch will only receive the changes required to finalize and stabilize the new version: things like translations for new strings, and bug fixes based on beta tester feedback.

After the cut, we run a script that picks up every new string added in the main application and in any dependencies and sends them to GlotPress, the WordPress community’s open source translation tool.

The last step on day one is to create a beta release and to distribute it to beta testers.

Stabilization

After code freeze, there are 12 days dedicated to stabilizing the release.

This is an iterative process: we start gathering feedback from beta users and, if there are any bugs, we correct them and push a new version out. This is repeated until every known issue is fixed.

In the meantime everyone on the team can keep developing new features and merging them into the development branch without being worried about affecting the current release.

Additionally, during this period the translators add translations for the new and updated strings.

Submission and Release

Ideally, by day 14 we are confident that we have a solid version to release to all users.

We download the updated localized strings and create a new build that we push to the app stores.

If needed, we generate new screenshots and update the app description and other metadata.

We always release with a phased rollout, starting to deploy the new version to a small percentage of users and monitoring to ensure that no new crashes are present.

On the same day, we start the process all over again, going back to Day 1 and cutting a new release from the development branch.

Release management

As you can imagine, keeping the ball rolling requires some coordination. We have people dedicated to managing and monitoring releases. They are in charge of making sure all the steps are done at the correct time and that every piece of user feedback is acted upon.

We have also built a set of scripts that help simplify the release process. We are continuously improving them as we aim to automate the whole process and provide solid release infrastructure that can be shared by all our apps.

After following this process for some time now, we find that it fits our goals well. It allows us to continuously improve our apps and consistently ship new features and reliability enhancements without annoying our users with too-frequent updates. It allows us to work with a deadline in mind, without forcing us to ship half-baked features, because the next version is not too far in the future. And it allows the community of beta testers and translators to contribute to every new version. We can’t thank them enough for their dedicated support!

The WordPress and WooCommerce apps are fully open source, so if you’re interested you can follow the release process on GitHub.

Also, if you want to help us make our mobile releases better, you can join our beta programs for iOS and Android.

Hello, World!

Howdy! Welcome to the Automattic Mobile Tech Blog! On this blog we’ll be sharing our experiences as we develop mobile apps, including cool things we’ve built, challenges we’ve overcome, and the rationale behind our solutions.

If you’re not familiar with Automattic, we’re the company behind WordPress.com, the Jetpack plugin for WordPress, WooCommerce, and much more. We’re also completely distributed, meaning that everybody works from the location they choose. We’re in 69 countries and speak 84 languages! By the way, we’re hiring.

At Automattic, our mobile team works on native iOS and Android apps including WordPress for iOS and Android, WooCommerce, Crowdsignal, and Simplenote. Of those, we spend most of our time working on WordPress.


The WordPress mobile apps are completely free and open source, and while the Automattic team does most of the development, they’re community projects and anybody can contribute. They have a range of features for managing and posting to both self-hosted WordPress sites and those hosted at WordPress.com. This covers roughly 30% of all websites on the internet! The apps are available in more than 30 localizations, have more than 100 contributors, and more than 200k daily active users.

The WordPress apps are some of the biggest open source mobile apps out there, having been in development since 2008. They’re a great way to learn more about mobile development and see how a large distributed team builds and structures complex apps. They’ve even been used by Continuous Integration platforms as a benchmark for their build times!

Most recently, we’ve been working on a mobile version of WordPress’ new block editor (our first foray into React Native); improving system integrations such as search, importing content from other apps, and media handling; improving our onboarding flows and empty states so users know how to get started and get unstuck; and creating a brand new app for WooCommerce so business owners can manage their stores from wherever they are.

We hope you’ll join us as we share our journey. If you’d like to check out the apps, they’re available on the Google Play Store and the iOS App Store, and you can find all the code on GitHub.