February 23, 2024

At Slack, the goal of the Mobile Developer Experience team (DevXp) is to empower developers to ship code with confidence while enjoying a pleasant and productive engineering experience. We use metrics and surveys to measure productivity and developer experience, looking at measures such as developer sentiment, CI stability, time to merge (TTM), and test failure rate.

We have gotten a lot of value out of our focus on mobile developer experience, and we think most companies under-invest in this area. In this post we will discuss why having a DevXp team improves efficiency and happiness, the cost of not having one, and how the team identified and resolved some common developer pain points to optimize the developer experience.

How it started

A few mobile engineers realized early on that engineers hired to write native mobile code might not necessarily have expertise in the technical areas surrounding their developer experience. They thought that if they could make the developer experience better for all mobile engineers, they could not only help engineers be more productive, but also delight our customers with faster, higher-quality releases. They got together and formed an ad-hoc team to address the most common developer pain points. The mobile developer experience team has grown from three people in 2017 to eight people today. In our five years as a team, we have focused on these areas:

  • Local development experience and IDE usability
  • Our growing codebase: ensuring visibility into problematic areas of the codebase that require attention
  • Continuous Integration usability and extensibility
  • Automation test infrastructure and automated test flakiness
  • Keeping the main branch green: making sure the latest main is always buildable and shippable

The cost of not investing in a mobile developer experience team

A mobile engineer usually starts a feature by creating a branch on their local machine and committing their code to GitHub. When they are ready, they create a pull request and assign it to a reviewer. Once a pull request is opened or a subsequent commit is added to the branch, the following CI jobs are kicked off:

  • Jobs that build artifacts
  • Jobs that run tests
  • Jobs that run static analysis

Once the reviewer approves the pull request and all checks pass on CI, the engineer can merge the pull request into the main branch. Here is a visualization of the developer flow and the flow interruptions associated with each area.

Here is a rough estimate of the cost of some developer pain points, and the cost to the company of not addressing them as the team grows:

While developers can learn to resolve some of these issues themselves, the time spent and the cost incurred are not justifiable as the team grows. Having a dedicated team that can focus on these problem areas and identify ways to make the developer teams more efficient ensures that developers can maintain an intense product focus.

Approach

Our team partners with the mobile engineering teams to prioritize which developer pain points to address, using the following approach:

  • Listen to customers and work alongside them. We partner with a mobile engineer as they work on a feature and observe their challenges.
  • Survey the developers. We conduct a quarterly survey of our mobile engineers where we track a general Net Promoter Score (NPS) around mobile development.
  • Summarize developer pain points. We distill the feedback into working areas that we can divide up as a team and tackle.
  • Gather metrics. It is important that we measure before we start addressing a pain point, both to ensure that the solution we deploy actually fixes the issue and to understand the exact impact our solution had on the problem area. We come up with metrics that correlate with the problem areas developers have and track them on dashboards, which lets us see how the metrics change over time.
  • Invest in experiments that improve developer pain points. We evaluate solutions to the problems either by consulting with other companies that develop at this scale or by coming up with a novel solution ourselves.
  • Consider using third-party tools. We evaluate whether it makes more sense to use existing solutions or to build our own.
  • Repeat this process. Once we release a solution, we look at the metrics to make sure it moves the needle in the right direction; only then do we move on to the next problem area.

Developer pains

Let's dive into some developer pain points in order of severity and examine how the mobile developer experience team addressed them. For each pain point, we will start with some quotes from our developers and then outline the steps we took.

CI test jobs that take a long time to complete

When a developer has to wait a long time for tests to run on their pull request, they switch to working on a different task and lose context on the original pull request. When the test results come back, if there is an issue they need to address, they have to re-orient themselves with the original task. This context switching takes a toll on developer productivity. The following are two quotes from our quarterly mobile engineering survey in 2018.

 

Faster CI time! I think this is requested a lot, but it would be amazing to have this improved

Jenkins build times are pretty high and it would be great if we can reduce these

From 1 to 10 developers, we had a couple hundred tests and ran all of them serially, using xcodebuild for iOS and Firebase Test Lab for Android.

Running the tests serially worked for a couple of years, until the test job started to take almost an hour. One of the solutions we considered was introducing parallelization to the test suites: instead of running all of the tests serially, we could split them into shards and run them in parallel. Here is how we solved this problem on the iOS and Android platforms.

iOS 

We considered writing our own tool to achieve this, but then discovered a tool called Bluepill that was open sourced by LinkedIn. It uses xcodebuild under the hood, but adds the ability to shard and execute tests in parallel. Integrating Bluepill decreased our total test execution time to about 20 minutes.

Using Bluepill worked for a few more years, until our unit test job once again started to take almost 50 minutes. Slack iOS engineers were adding more test suites, and we could no longer rely on parallelization alone to lower TTM.

How moving to a modern build system helped drive down CI job times

Our next strategy was to implement a caching layer for our test suites. The goal was to run only the tests that needed to run on a particular pull request, and return the remaining test results from cache. The problem was that xcodebuild doesn't support caching, so to implement test caching we needed to move to a different build system: Bazel. We use Bazel's disk cache on CI machines so builds from different pull requests can reuse build outputs from another user's build rather than building each new output locally.

In addition to the Bazel disk cache, we use the bazel-diff tool, which allows us to determine the exact set of impacted targets between two Git revisions. The two revisions we compare are the tip of the main branch and the last commit on the developer's branch. Once we have the list of impacted targets, we test only those targets.
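To make the flow concrete, here is a rough sketch of what such a selective-testing step can look like. The bazel-diff subcommands and flags below follow our reading of its README and may differ by version; the script is illustrative, not a copy of our CI code.

```python
# Illustrative selective-testing flow built around bazel-diff; paths, flags, and
# revisions are assumptions, not Slack's actual CI scripts.
import subprocess

def sh(*args: str) -> str:
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

def hashes_at(revision: str, out_file: str) -> str:
    sh("git", "checkout", revision)
    # `generate-hashes` records a content hash for every Bazel target at this revision.
    sh("bazel-diff", "generate-hashes", "-w", ".", "-b", "/usr/local/bin/bazel", out_file)
    return out_file

branch_sha = sh("git", "rev-parse", "HEAD").strip()
main_hashes = hashes_at("origin/main", "main_hashes.json")
branch_hashes = hashes_at(branch_sha, "branch_hashes.json")

# Compare the two hash files to get the targets affected by this pull request.
sh("bazel-diff", "get-impacted-targets",
   "-sh", main_hashes, "-fh", branch_hashes, "-o", "impacted_targets.txt")

impacted = [t for t in open("impacted_targets.txt").read().split() if t]
if impacted:
    # Only the impacted targets are rebuilt and retested; everything else is a cache hit.
    subprocess.run(["bazel", "test", *impacted], check=True)
else:
    print("No targets impacted; skipping tests.")
```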

With the Bazel build system and bazel-diff, we were able to lower TTM to an average of 9 minutes, with a minimum TTM of 4.5 minutes. This means developers get the feedback they need on their pull requests faster, and can more quickly get back to collaborating with others and working on their features.

Android 

In the early days, TTM was around 50 minutes and Firebase Test Lab (FTL) did not have test sharding. We built an in-house test sharder on top of FTL called Fuel to break tests into multiple shards and call FTL APIs to run each shard in parallel. This brought TTM from 50+ minutes to under 20 minutes. Here is the high-level overview:
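As a rough sketch of that pattern in code (not the actual Fuel implementation; the class names and APK paths are placeholders, and the gcloud flags shown are the commonly documented ones for instrumentation runs):

```python
# Illustrative only: split instrumentation test classes into shards and run each
# shard on Firebase Test Lab in parallel. Not the actual Fuel implementation.
from concurrent.futures import ThreadPoolExecutor
import subprocess

TEST_CLASSES = [  # placeholder test classes
    "com.example.LoginTest", "com.example.ChannelTest",
    "com.example.MessageTest", "com.example.SearchTest",
]
NUM_SHARDS = 2

def run_shard_on_ftl(shard: list[str]) -> int:
    targets = ",".join(f"class {cls}" for cls in shard)
    return subprocess.call([
        "gcloud", "firebase", "test", "android", "run",
        "--type", "instrumentation",
        "--app", "app-debug.apk",               # placeholder app APK
        "--test", "app-debug-androidTest.apk",  # placeholder test APK
        "--test-targets", targets,
    ])

shards = [TEST_CLASSES[i::NUM_SHARDS] for i in range(NUM_SHARDS)]
with ThreadPoolExecutor(max_workers=NUM_SHARDS) as pool:
    exit_codes = list(pool.map(run_shard_on_ftl, shards))
print("PASS" if all(code == 0 for code in exit_codes) else "FAIL")
```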

We continued using Fuel for two and a half years, and then moved to an open source test sharder called Flank. We continue to use Flank today to run Android functional and end-to-end UI tests.

Test-related failures

When a check fails on a pull request because of flaky or unrelated test failures, it has the potential to take the developer out of flow, and potentially impacts other developers as well. Let's take a look at a few culprits behind unrelated pull request failures and how we have addressed them.

Fragile automation frameworks

From 2015 to early 2017, we used the Calabash testing framework, which interacted with the UI and wrapped that logic in Cucumber to make the steps human readable. Calabash is a "black box" test automation framework and needs a dedicated automation team to write and maintain tests. We noticed that the more tests were added, the more fragile the test suites became. When a test failed on a pull request, the developer would reach out to an Automation Engineer to understand the failure, attempt a fix, then rerun the job and hope that another fragile test didn't fail their build. This resulted in a long feedback loop and increased TTM.

As the team grew, we decided to move away from Calabash and switched to Espresso, because Espresso is tightly coupled with the Android OS and is written in the platform's native languages (Java or Kotlin). Espresso is powerful because it is aware of the internal workings of the Android OS and can interface with it easily. This also meant that Android developers could easily write and modify tests, because they were written in the language they were most comfortable with. A few benefits of the migration worth highlighting:

  • It shifted testing responsibility from our dedicated automation team to developers, so they can write tests as needed to cover the logic in their code
  • Testing time went from ~350 minutes to ~60 minutes when we moved from Calabash to Espresso and FTL

Flaky tests

In early 2018, developer sentiment towards testing was poor and testing caused a lot of developer pain. Here are a couple of quotes from our developer survey:

 

Flaky tests are still a bottleneck sometimes. We should have a better way of tracking them and pinging the owner to fix them before they cause too much friction

Flaky tests slow me down to a halt – there needs to be a more streamlined process in place for proceeding with PRs once flaky tests are found (instead of blocking a merge as it happens now)

At one point, 57% of the test failures on our main branch were due to flaky tests, and the percentage was even higher on developer pull requests. We spent some time learning about flaky tests and have managed to get them under control in recent years by building a system that auto-detects and suppresses flaky tests, so that developer experience and flow stay uninterrupted. Here is a detailed article outlining our approach and how we reduced the test failure rate from 57% to 4%.
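The linked article covers the real system in depth; as a simplified sketch of the core idea (our own illustration, not the production implementation), a job can flag any test that both passed and failed on the same commit and quarantine it:

```python
# Simplified sketch of flaky-test detection: a test with mixed pass/fail results
# on the same commit is non-deterministic. Not the production system described
# in the linked article.
from collections import defaultdict

# (test_name, commit_sha, passed) records, e.g. pulled from CI result storage.
history = [
    ("testLogin", "abc123", True), ("testLogin", "abc123", False),
    ("testSearch", "abc123", True), ("testSearch", "def456", True),
]

def find_flaky(runs: list[tuple[str, str, bool]]) -> set[str]:
    outcomes: dict[tuple[str, str], set[bool]] = defaultdict(set)
    for test, sha, passed in runs:
        outcomes[(test, sha)].add(passed)
    # Both True and False observed for the same (test, commit) pair => flaky.
    return {test for (test, _), results in outcomes.items() if len(results) == 2}

def quarantine(tests: set[str]) -> None:
    for test in sorted(tests):
        # A real system would suppress the test in CI and notify the owning team.
        print(f"Quarantining flaky test: {test}")

quarantine(find_flaky(history))  # -> Quarantining flaky test: testLogin
```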

CI-related failures

For years we used Jenkins to power the mobile CI infrastructure, using Groovy-based Jenkinsfiles. While it worked, it was also the source of a lot of frustration for developers. These problems were the most impactful:

  • Frequent downtime
  • Degraded performance of the system
  • Failure to pick up Git webhooks, and therefore not starting pull request CI jobs
  • Failure to update the pull request when a job fails
  • Difficulty debugging failures due to poor UX

After flaky tests, CI downtime was the biggest bottleneck negatively impacting the mobile team's productivity. Here are some quotes from our developers regarding Jenkins:

 

Need more reliable hooks between the Jenkins CI and GitHub. When things do go wrong, there are sometimes no links in GH to go to the right place. Also, sometimes CI passes but doesn't report back to GH, so the PR is stuck in limbo until I manually rebuild stuff

Jenkins is a pain. Remove the Blue Ocean Jenkins UI that is confusing and everyone hates

Jenkins is a mess to me. There are too many links and I only care about what broke and what button/link I need to click to retry. Everything else is noise

After using Jenkins for more than six years, we migrated away from it to Buildkite, which has had 99.96% uptime so far. Webhook-related issues have completely disappeared, and the UX is simple enough for developers to navigate without needing our team's help. This has not only improved the developer experience but also decreased the triage load for our team.

The immediate impact of the migration was an 8% increase in CI stability, from ~87% to ~95%, and a 41% reduction in time to merge, from ~34 minutes to ~20 minutes.

Merge conflicts

Conflicts when adding new modules or files to the Xcode project for iOS

As the number of iOS engineers at Slack grew past 20, one area of constant frustration was the checked-in Xcode project file. The Xcode project file is an XML file that defines all of the Xcode project's targets, build configurations, preprocessor macros, schemes, and much more. On a small team, it is easy to make changes to this file and commit them to the main branch without causing any issues, but as the number of engineers increases, so does the chance of causing a conflict by changing this file.

 

“I think the concern is more so the Xcode project file; resolving conflicts on that thing is painful and error prone. I'm not sure what the best approach is to alleviating this potential pain point, especially if they've added new code files.”

“I had a dozen or so conflicts in the project file that I had to manually resolve. Not a huge issue in itself, but when you're expecting to merge a PR it can be a surprise”

The solution we implemented was a tool called XcodeGen. XcodeGen allowed us to delete the checked-in .xcodeproj file and instead generate the Xcode project dynamically from a YAML file containing definitions of all of our Xcode targets. We hooked this tool up to a command line interface so that iOS engineers could create the Xcode project from the command line. Another benefit is that all of the project- and target-level settings are defined in code rather than in the Xcode GUI, which makes the settings easier to find and edit.

After adopting Bazel, we took it a step further and now generate the YAML file dynamically from our Bazel build descriptions.
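As a much-simplified sketch of that generation step (the Bazel query, module paths, and YAML keys below are illustrative placeholders, not our real project layout):

```python
# Illustrative: derive a minimal XcodeGen project.yml from Bazel target metadata.
# The query, module paths, and settings are placeholders, not Slack's real setup.
import subprocess
import yaml  # PyYAML

def bazel_library_targets() -> list[str]:
    out = subprocess.run(
        ["bazel", "query", 'kind("swift_library", //modules/...)'],
        check=True, capture_output=True, text=True,
    ).stdout
    return [line for line in out.splitlines() if line.startswith("//")]

def to_xcodegen_target(label: str) -> tuple[str, dict]:
    # "//modules/Feature:Feature" -> target name "Feature", sources "modules/Feature"
    path = label.removeprefix("//").split(":")[0]
    return path.split("/")[-1], {"type": "framework", "platform": "iOS", "sources": [path]}

project = {
    "name": "Slack",
    "targets": dict(to_xcodegen_target(label) for label in bazel_library_targets()),
}
with open("project.yml", "w") as f:
    yaml.safe_dump(project, f, sort_keys=False)
# Running `xcodegen generate` then produces the .xcodeproj locally; nothing is checked in.
```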

Multiple concurrent merges have the potential to break the main branch

So far we have talked about issues developers can run into when writing code locally and opening a pull request. But what happens when multiple developers try to land their pull requests on the main branch at the same time? With a large team, many merges to main happen throughout the day, which can make a developer's pull request stale quickly. The longer a developer waits to merge, the greater the chance of a merge conflict.

A growing number of conflicts caused by concurrent merges started breaking the main branch and negatively affecting developer productivity. Until such a conflict was resolved, the main branch would remain broken and pause all productivity. At one point, these conflicts were breaking the main branch multiple times a day, and more and more developers started requesting a merge queue.

 

We keep breaking the main branch. We need a merge queue.

We brainstormed different solutions and eventually landed on a third-party solution called Aviator, combined with our in-house tool Mergebot. We felt that building and maintaining a merge queue would be too much work for us, and that the best option was to rely on a company that spends all of its time working on this problem. With Aviator, developers add their pull request to a queue instead of merging directly to the main branch. Once a pull request is in the queue, Aviator merges main into the developer's branch and runs all of the required checks. If a pull request is found to break main, the merge queue rejects it and the developer is notified via Slack. This approach helps avoid merge conflicts breaking main.
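Conceptually, a merge queue behaves like the loop below; this is a sketch of the general idea under our own simplified model, not how Aviator is implemented:

```python
# Conceptual merge-queue loop: validate each queued PR against the latest main
# and only let it land when all required checks pass. Not Aviator's implementation.
from collections import deque
from dataclasses import dataclass

@dataclass
class PullRequest:
    number: int
    author: str

def merge_main_into(pr: PullRequest) -> bool:
    """Placeholder: merge the tip of main into the PR branch; False on conflict."""
    return True

def run_required_checks(pr: PullRequest) -> bool:
    """Placeholder: run builds and tests on the merged result."""
    return True

def notify(author: str, message: str) -> None:
    print(f"@{author}: {message}")  # a real queue would DM the developer in Slack

def process_queue(queue: deque[PullRequest]) -> None:
    while queue:
        pr = queue.popleft()
        if merge_main_into(pr) and run_required_checks(pr):
            notify(pr.author, f"PR #{pr.number} merged to main")
        else:
            # A rejected PR never lands on main, so main stays green.
            notify(pr.author, f"PR #{pr.number} dropped from the queue; fix and re-queue")

process_queue(deque([PullRequest(101, "alice"), PullRequest(102, "bob")]))
```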

 

Way better now with Aviator. Only pain point is I can't merge my pull requests and have to rely on Aviator. Aviator takes hours to merge my PR to master. Which makes me anxious.

Being an early adopter means you get some benefits, but also some pain. We worked closely with the Aviator team to identify and address developer pains such as increased time to merge a pull request into the main branch, and failure reporting on a pull request when it is dropped out of the queue due to a conflict.

Checking pull request progress/status

This is a request we received in 2017 in one of our developer surveys:

 

Would really love timely alerts for PR assignments, comments, approvals etc. Also would be nice if we could get a DM if our builds pass (rather than only the alert for when they fail) with the option to merge it right there from Slack if we have all the needed approvals.

Later that year we created a service that monitors Git events and sends Slack notifications to the pull request author and reviewer accordingly. The bot is called "Mergebot": it notifies the pull request author when a comment is added to their pull request or its status changes, and it notifies the reviewer when a pull request is assigned to them. Mergebot has helped shorten the pull request review process and keep developers in flow. This is yet another example of how saving just five minutes of developer time saved ~$240,000 for a 100-developer team in a year.
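For a sense of where a figure like that comes from, here is a back-of-envelope version of the math; reading the five minutes as a per-developer, per-day saving and the loaded hourly cost are our assumptions, not numbers stated in this post:

```python
# Back-of-envelope estimate only; the assumptions below (5 minutes saved per
# developer per working day, ~240 working days, ~$120/hour loaded cost) are
# illustrative, not figures stated elsewhere in this post.
minutes_saved_per_dev_per_day = 5
developers = 100
working_days_per_year = 240
loaded_cost_per_hour = 120  # USD

hours_saved = minutes_saved_per_dev_per_day * developers * working_days_per_year / 60
annual_savings = hours_saved * loaded_cost_per_hour
print(f"{hours_saved:.0f} hours ≈ ${annual_savings:,.0f} per year")  # 2000 hours ≈ $240,000
```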

More recently, GitHub rolled out a similar feature called scheduled reminders which, once opted into, notifies a developer of any PR update via a Slack notification. While it covers the basic reminder functionality, Mergebot is still our developers' preferred bot because it doesn't require explicit opt-in and also allows pull requests to be merged with the click of a button from Slack.
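To show the shape of a bot like this, here is a minimal sketch that turns GitHub webhook events into Slack DMs; the endpoint, token, and user mapping are assumed placeholders, and this is not Mergebot's actual code:

```python
# Minimal sketch of a Mergebot-style service: translate GitHub webhook payloads
# into Slack DMs. Endpoint names, tokens, and the user mapping are assumptions;
# this is not Mergebot's actual implementation.
from flask import Flask, request
import requests

app = Flask(__name__)
SLACK_TOKEN = "xoxb-..."  # placeholder bot token
GITHUB_TO_SLACK = {"octocat": "U012ABCDEF"}  # placeholder GitHub -> Slack user mapping

def dm(github_user: str, text: str) -> None:
    slack_id = GITHUB_TO_SLACK.get(github_user)
    if slack_id:
        requests.post(
            "https://slack.com/api/chat.postMessage",
            headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
            json={"channel": slack_id, "text": text},
        )

@app.post("/github/webhook")
def handle_webhook():
    event, payload = request.headers.get("X-GitHub-Event"), request.json
    pr = payload.get("pull_request", {})
    if event == "pull_request_review" and payload.get("action") == "submitted":
        dm(pr["user"]["login"], f"Your PR #{pr['number']} was reviewed")
    elif event == "pull_request" and payload.get("action") == "review_requested":
        dm(payload["requested_reviewer"]["login"], f"You were asked to review PR #{pr['number']}")
    return "", 204

# A production bot would also verify webhook signatures and post build status updates.
```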

Conclusion

We want Slack to be the best place in the world to make software, and one way we are doing that is by investing in the mobile developer experience. Our team's mission is to keep developers in flow and make their working lives simpler, more pleasant, and more productive. Here are some direct quotes from our mobile developers:

 

Dev XP is great. Thank you for always taking feedback from the mobile development teams! I know you care 💪

We're using modern practices. Bazel is great. I feel incredibly supported by DevXp and their hard work.

The tools work well. The code is modularized well. DevXp is responsive and helpful and continues to iterate and improve.

Do these kinds of developer experience challenges sound interesting to you? If so, come join us!