Last year my team at Zocdoc was tasked with scaling up our platform that processes real-time scheduling updates from doctors across the country. This was an important project to enable the company to address growth and prepare for the future. One decision we made early on was to move from an on-premises datacenter architecture to one that leverages AWS – Chris shared our general approach in a previous blog post.

Architecture aside, we needed to decide on a language to code our new services that would run in AWS. In this post we will share how our team arrived at Scala as the default language for new services in AWS and our experience with it so far.

Since much of our monolith at Zocdoc is written in C#, it would have been our default language choice, but we chose Scala for several reasons:

  • Access to the JVM open source ecosystem: While the .NET ecosystem is improving, the JVM open source ecosystem is still much larger and more active. Using the JVM ecosystem enabled us to reuse existing solutions instead of writing our own; we expect this choice to keep paying off in the future.
  • First class citizen for AWS SDK support: Java – and thereby Scala – is always among the first languages, if not the first, to receive SDK support for new features.
  • Linux support: Moving from Windows (on which our legacy monolith runs) to Linux for new services was important to us for many reasons (e.g. cost, native Docker support). At the time we made this decision, .NET Core – a variant of the .NET runtime that also supports Linux – wasn’t available yet, so sticking with C# was not an option.
  • Scala is a functional language: Having worked over the years with declarative syntax in C# (i.e. LINQ), Scala as a functional language was a much more appealing option to the team than Java, which is essentially a very verbose OOP language.

Access to the JVM open source ecosystem

One of the deciding factors when picking our programming language was access to a large ecosystem of open source solutions for problems we might encounter. It can be tempting to try to do it yourself, but more often than not, using an existing library gives you speed and reliability which in turn frees you up to focus on the core business problem you are trying to solve.

Additionally, since Scala gets compiled to Java bytecode running on the Java Virtual Machine (JVM), we can not only use Scala libraries, but also any Java library. This gives us access to the single biggest open source ecosystem out there!
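
As a small illustration of this interoperability, here is a minimal sketch – using only Java standard-library classes – of Scala calling Java directly, with no bindings or wrappers in between:

```scala
// Java standard-library types used directly from Scala: it is all one runtime.
import java.time.LocalDate
import java.util.regex.Pattern

// A Java type behaves like any Scala class
val goLive = LocalDate.of(2017, 6, 1).plusDays(30)
println(goLive) // 2017-07-01

// Java APIs compose naturally with Scala code
val pattern = Pattern.compile("[a-z]+")
println(pattern.matcher("kinesis").matches()) // true
```

Any JVM artifact published to a Maven repository can be pulled in the same way through the build tool.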

Once we made the decision to use Scala, we then needed to decide on some common frameworks for the various services that we needed to build. This led us to the following open source libraries:

  • Finatra: We decided to use Twitter’s Finatra framework as our main web service framework. This seemed like a safe choice as it has great open source support and its DSL is simple enough to follow especially when compared to some other frameworks out there.
  • Slick: Slick is a modern database query and access library for Scala, abstracting away DB access similar to an ORM, so that we did not have to use inline SQL for RDS access.
  • ScalaTest: Our unit and integration tests are written using ScalaTest. With its human readable DSL syntax ScalaTest nudged us in the direction of a Behavior Driven Development style of testing.
  • Metrics Datadog Reporter: We use Datadog extensively for metrics and alerting at Zocdoc. This Coursera library offered us an easy way to get data to our Datadog agents.
  • AWScala: AWS SDK in Scala – this currently only offers a subset of the full SDK feature set but being a native Scala library results in very clean usage patterns.
  • Flyway: A simple database migration tool – this library enabled us to apply DB scripts for updating our RDS schemas.

First class citizen for AWS SDK support

Java and in turn Scala seems to be at the forefront of Amazon’s mind when it comes to building SDKs for AWS. Although we have noticed improvements in the past year on the SDK update delay for other languages, .NET clients for AWS SDKs, in particular, are still not quite on the same level as the Java and Node SDKs.

Support for Kinesis was also an important factor for us, as our initial architecture relied heavily on it. Because of this, ease of implementation for KCL client apps was very important to us. Java is realistically the only choice here: in languages not based on the JVM (e.g. C#, Node) you would instead have to use the MultiLangDaemon as a workaround, which ultimately requires you to install Java on your machine anyway.

Scala is a functional language

Since we were already familiar with, and heavily leveraging, functional language aspects in C#, we wanted to continue on this path. Java would have been an alternative, since it is somewhat similar to C#, but it suffers from being very verbose, and many of its features (e.g. generics support, lambdas) did not improve as rapidly as C#’s did. F# was another contender, but of course it would not have let us leverage the JVM open source ecosystem.

Our Experience So Far

Once we made the decision to commit to Scala we wanted to make it easy for new team members to ramp up on our new code base. To simplify this process we started to define best practices in both tooling and coding patterns. We knew this would evolve over time but we wanted to start with the information we already knew when we made this decision.

Build and Development Environment

Tooling was an important decision for us. We knew we wanted to standardize on an IDE in an effort to simplify the ramp-up for our team. There are really only two choices for Scala IDEs that are on par with the features we’re used to in Visual Studio: Eclipse and IntelliJ. Since we were already quite familiar with ReSharper by JetBrains in Visual Studio, we settled on IntelliJ. Compared to Visual Studio / .NET, performance measured in IDE responsiveness and compile time is not great – though part of that is Scala’s fault. Since our services are fairly small and isolated from each other this was not a show stopper for us, but it is something to consider when planning a large project, where compile time adds up quickly.

Because of its plugin support for SBT, CloudFormation and YAML, IntelliJ quickly became the one-stop shop for developing and deploying our services. We chose SBT as our build tool because it has some nice features like incremental recompilation, an interactive shell within the project, and a Docker plugin that made it easy to build containers. We also used the Scalastyle SBT plugin to document and enforce our coding standards, making it easy for us to keep our code clean as more of our engineers started to develop in Scala.
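
To make this concrete, a minimal build definition in this spirit might look like the following sketch – the project name and version numbers are illustrative, not our actual configuration:

```scala
// build.sbt – illustrative sketch only
name := "scheduling-service"   // hypothetical service name
scalaVersion := "2.12.4"

// Surface deprecations and other warnings to help keep the codebase clean
scalacOptions ++= Seq("-deprecation", "-feature", "-unchecked")

// Test framework; the version here is an example, not a pinned recommendation
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.4" % Test
```

Plugins such as Scalastyle and the Docker integration are declared in project/plugins.sbt in the same declarative style.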

Ramping up in Scala

We found that several aspects of Scala and its ecosystem might be important for a broader audience to understand:

  • Scala syntax – concise and low ceremony
  • Dependency Injection
  • Pattern Matching
  • Error Handling with Trys
  • Pitfalls

Scala syntax – concise and low ceremony

Since we were already familiar with, and heavily leveraging, functional aspects in C#, we adopted a declarative style of programming in Scala from the beginning. Coming from LINQ, there is often an equivalent function in Scala that we can apply – some of this is captured in this Stack Overflow answer.

The real challenge for us was to decide on what we consider good code in Scala – Scala as a language allows you to write the same code in many different ways – too many for its own good. While there are generally good patterns to follow (e.g. prefer immutable data, concise pure functions) we are still calibrating as a team on exactly what language features to use in a given context. This happens daily on pull requests or when developers pair program.
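
As a sketch of what that calibration looks like in practice, here are two ways of writing the same computation – both compile, but we steer toward the second (the data below is illustrative):

```scala
// Collect the names of planets that have at least one moon.
val planets = Seq(("Mercury", 0), ("Earth", 1), ("Jupiter", 69))

// Imperative style: legal Scala, but mutable and noisier
var withMoonsMutable = List.empty[String]
for ((name, moons) <- planets)
  if (moons > 0) withMoonsMutable = withMoonsMutable :+ name

// The declarative style we prefer: immutable data, one pure expression
val withMoons = planets.collect { case (name, moons) if moons > 0 => name }

println(withMoons) // List(Earth, Jupiter)
```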

The following example projects all galaxies that have fewer than five planets into a new sequence of their names – each Scala method has an equivalent in C# LINQ.

//C# code
galaxies.Where(galaxy => galaxy.Planets < 5)
        .Select(galaxy => galaxy.Name);

//Scala code
galaxies.filter(_.planets < 5).map(_.name)

Case classes and Scala’s generally more concise syntax mean even more code brevity – no more boilerplate code for constructors or properties (a gap C# has since narrowed with C# 9’s record types).

//C# code
public class Planet
{
    public string Name { get; }
    public int Age { get; }

    public Planet(string name, int age)
    {
        Name = name;
        Age = age;
    }

    // Copy() and value-based Equals() would also have to be written by hand
}

//Scala code
case class Planet(name: String, age: Int)

Despite its brevity, it’s important to note that idiomatic Scala doesn’t initially translate to developer productivity gains (unless typing speed is your main bottleneck) – the steep ramp-up on reading and writing idiomatic Scala may slow you down both when writing code and when reviewing other developers’ code.

Dependency Injection

One important aspect of developing maintainable code we can trust even after refactoring is testability. To test in isolation we often have to mock out dependencies, so we had to cover this in Scala as well. We evaluated compile-time dependency injection solutions like the Cake Pattern and implicits.

We found the Cake Pattern to be quite verbose with a lot of boilerplate, so in the end we decided against using it. While implicits increase the burden on any code maintainer (e.g. the IDE doesn’t really help you find the implicit definitions actually in use), they have their place in reducing boilerplate. Ultimately, we chose Google Guice as our preferred way to inject dependencies. Guice is a runtime, constructor-based dependency injection framework – a concept we were familiar with, having used Autofac in our .NET ecosystem for years.
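
The pattern Guice automates is plain constructor injection; here is a dependency-free sketch of it (all type names are illustrative, not from our codebase):

```scala
// A service's collaborators are abstracted behind traits...
trait Clock { def now(): Long }
class SystemClock extends Clock { def now(): Long = System.currentTimeMillis() }

// ...and passed in through the constructor, which is exactly the graph a
// DI container wires up for you at runtime.
class AppointmentService(clock: Clock) {
  def isInPast(appointmentMillis: Long): Boolean = appointmentMillis < clock.now()
}

// Tests substitute a fake without any framework involved
val fixedClock = new Clock { def now(): Long = 1000L }
val service = new AppointmentService(fixedClock)
println(service.isInPast(500L))  // true
println(service.isInPast(2000L)) // false
```

With Guice, the constructor gets an @Inject annotation and a module binds Clock to SystemClock; the shape of the code stays the same.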

Pattern Matching

Pattern Matching is more than just a better switch case statement. It’s a succinct way to navigate different code paths and check for the “shape” of a value, while avoiding less readable if-else chains. Scala extractors (which come built into case classes) allow us to compose and decompose instances so we can inspect the constituent parts while matching patterns.

The following example prints “We’ve reached!” to the standard output if the input CelestialObject is Earth or the distance in light-years if it’s a star.

//Scala code
sealed trait CelestialObject
case class Planet(name: String, age: Int) extends CelestialObject
case class Star(name: String, lightyears: Double) extends CelestialObject

def canWeLandSpaceship(input: CelestialObject): Unit =
  input match {
    case Planet("Earth", _)  => println("We've reached!")
    case Star(_, lightyears) => println(s"We're $lightyears light-years from Earth")
  }

Error handling with Trys

In C# and other imperative languages the usage of exception handling in regular control flow is frowned upon. Not only does it make the code much less readable, but in some cases it will perform much worse and is just a bad practice in general for routinely expected behavior. The solution in imperative languages is to use other control structures designed to solve the problem and avoid the use of exception handling to change the control flow.

In Scala, Trys alleviate this problem by representing computations that were either successful (Success[A], wrapping the result of the computation) or unsuccessful (Failure[A], wrapping a Throwable).

The following example divides one integer by another and prints the result to the standard output, or prints a failure message if it tried to divide by zero.

//C# code
// What you'd have to type out in C#
public static void Divide(int numerator, int denominator)
{
    try
    {
        var result = numerator / denominator;
        Console.WriteLine($"Successfully divided {numerator} " +
                          $"by {denominator} to get {result}");
    }
    catch (DivideByZeroException e)
    {
        Console.WriteLine($"Failed because {e.Message}");
    }
}

Divide(6, 3); //Successfully divided 6 by 3 to get 2
Divide(6, 0); //Failed because Attempted to divide by zero.

//Scala code
import scala.util.{Failure, Success, Try}

def divide(numerator: Int, denominator: Int): Unit =
  Try(numerator / denominator) match {
    case Success(value) => println(s"Successfully divided " +
      s"$numerator by $denominator to get $value")
    case Failure(e) => println(s"Failed because of $e")
  }

divide(6, 3) // Successfully divided 6 by 3 to get 2
divide(6, 0) // Failed because of java.lang.ArithmeticException: / by zero
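
Beyond pattern matching, a Try also composes with combinators like map, recover and getOrElse, which keeps the happy path linear; a small sketch (the function below is illustrative, not from our codebase):

```scala
import scala.util.Try

// Parse a port number, falling back to a default when the input is not numeric
def parsePort(s: String): Int =
  Try(s.toInt)
    .recover { case _: NumberFormatException => 8080 } // handle the failure once
    .get                                               // safe: recover covered it

println(parsePort("9000")) // 9000
println(parsePort("oops")) // 8080
```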


Pitfalls

As much as we love Scala as a language, like most things, it is not perfect. As our codebase started to evolve we discovered several things that we did not anticipate at the beginning and that would have been good to know when we originally decided to use the language.

Enumerations in Scala
When we started to include enums in our API contracts, we hit a problem right away. Coming from .NET, we were used to being able to send an input string that would deserialize into its corresponding enumeration type: the string “Phobos”, for example, would deserialize to an enumeration value of MoonsOfMars thanks to some nifty string-to-enum converters, the idea being that the string value “Phobos” is more descriptive, from the consumer’s point of view, than its underlying numerical value. To achieve the same thing with ADTs in Scala, we needed custom deserialization boilerplate, which might look something like this in its most rudimentary form.

//Scala code
import org.json4s._

sealed trait MoonsOfMars

object MoonsOfMars {
  case object Phobos extends MoonsOfMars
  case object Deimos extends MoonsOfMars

  val PhobosId: Byte = 1
  val DeimosId: Byte = 2

  def apply(value: Byte): MoonsOfMars = value match {
    case PhobosId => Phobos
    case DeimosId => Deimos
    case _ => throw new IllegalArgumentException(
      s"Illegal Byte value for MoonsOfMars: $value")
  }

  def apply(value: String): MoonsOfMars = value match {
    case "Phobos" => Phobos
    case "Deimos" => Deimos
    case _ => throw new IllegalArgumentException(
      s"Illegal String value for MoonsOfMars: $value")
  }

  def unapply(moon: MoonsOfMars): Byte = moon match {
    case Phobos => PhobosId
    case Deimos => DeimosId
  }
}

// Custom json4s serializer mapping between MoonsOfMars and its JSON shape
class MoonsOfMarsSerializer extends Serializer[MoonsOfMars] {
  private val MyMoonsOfMars = classOf[MoonsOfMars]

  def deserialize(implicit format: Formats):
      PartialFunction[(TypeInfo, JValue), MoonsOfMars] = {
    case (TypeInfo(MyMoonsOfMars, _), json) => json match {
      case JObject(JField("Type", JString(x)) :: _) => MoonsOfMars(x)
      case x => throw new MappingException("Can't convert " + x + " to MoonsOfMars")
    }
  }

  def serialize(implicit formats: Formats): PartialFunction[Any, JValue] = {
    case x: MoonsOfMars => JString(x.toString)
  }
}

That said, newer versions of Scala address this: Scala 3 ships native enum support. Using Java enums is another viable option.
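
For completeness, the standard library’s scala.Enumeration also gives the string round-trip out of the box, though with weaker compile-time exhaustiveness checking than a sealed trait of case objects:

```scala
// String round-trip via scala.Enumeration (Scala 2 standard library)
object MoonsOfMarsEnum extends Enumeration {
  val Phobos, Deimos = Value
}

println(MoonsOfMarsEnum.withName("Phobos")) // Phobos
// An unknown name fails only at runtime, with a NoSuchElementException
println(scala.util.Try(MoonsOfMarsEnum.withName("Titan")).isFailure) // true
```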

Learning Curve
There is a considerable learning curve with Scala, especially when coming from an imperative programming language background, and the effort to ramp up should not be underestimated. Everyone learns differently, and some developers pick it up faster than others. This made level-of-effort estimates vary more widely and team velocity harder to predict during the transition, which in turn made it more difficult to consistently deliver on our projects.

In Summary

The language ecosystem at Zocdoc has evolved from a C# monoculture to supporting a trio of languages – C#, Scala and Node. Each of these languages has unique properties, and choosing among them is now a team-level decision at Zocdoc, based on picking the right tool for the job and on developer preferences.

Overall, we consider our transition to Scala a success. In the past few months we have established a set of patterns, standardized frameworks, libraries and tools to use when developing in Scala. Our work helped define a low-effort happy path that teams at Zocdoc can now follow, lowering the bar for language adoption of Scala within the company. This includes writing code, testing, app containerization, deploying to our CI pipeline and production environments, and metrics and alerting.

Despite the many challenges we faced, we successfully delivered our first few projects in Scala and continue to refine our approach.

About the author

Anupam Burra is a Senior Software Engineer on the Synchronizer Team at Zocdoc. He’s passionate about Distributed Systems and Progressive Rock music.


7 responses to “Migrating to Scala”

  1. Jeffrey Aguilera says:

    You should have used MacWire for DI.

    • Anupam Burra says:

      Hey Jeffrey, thank you for your comment. We wanted to pick the simplest tool for the job, thereby unblocking the several projects that would have been bottlenecked by this. Google Guice was attractive to us for a couple of reasons: first, it came out of the box with Finatra, which exposes a way for developers to register concrete implementations with minimal boilerplate. Second, it’s very familiar as a pattern to engineers at Zocdoc (coming from .NET, where we used Autofac, another runtime constructor-based DI framework). As more functionality, and thereby more dependencies, got added to our services, we found that engineers were quickly able to on-board to this pattern.
      There are many interesting flavors of dependency injection solutions out there for Scala, with trade-offs across various technical concerns – compile-time safety, boilerplate, ease of use, etc. We found that picking the right solution is not just a function of how well it addresses these technical problems, but also of how well it fits the organization’s maturity with the language and the size of the team, which ultimately determines the ease of adoption for the broader technology team as a whole. All that said, MacWire is yet another very interesting way of solving the dependency injection problem, beyond those already mentioned in this blog post. Thank you for pointing that out, and I hope that makes sense!

  2. Sung Kim says:

    How do you guys keep developers up to date with the new language, Scala? The transition from .NET to Java seems to be low but Scala being a functional language, the hurdle must have been high.

    • Anupam Burra says:

      Hey Sung, thanks for your comment. Given that we were already slightly familiar with a functional style of programming in C#, we had a slightly higher jumping-off point. That said, we do provide all engineers at Zocdoc with a support system that enables them to become productive Scala engineers. First, our pull requests, which get cross-team commentary from engineers at Zocdoc who actively develop in Scala, are a great place for debates and discussions on best practices and patterns as they apply to the practical problems being solved. Second, we have an internal Scala Guild – a meetup of engineers at all levels of Scala expertise – where we deep-dive and code along to problems that best explain a curated topic for that session. We also have distribution lists and chat rooms where engineers regularly share articles on new frameworks and patterns, and we provide all the necessary resources to on-board: links to Scala courses, textbooks, and on-premise training sessions from experts in the field. We have also experimented with having engineers do a rotation on a team more familiar with Scala, to learn best practices in the context of writing production-quality Scala code. All of these channels reduce the barrier to entry for new engineers who want to write Scala at Zocdoc, even though it was – and still is – a challenging learning curve for sure. I hope that answers the question.

  3. John Jimenez says:

    did you get a chance to look into enumeratum?

    • Anupam Burra says:

      Hi John,

      Thanks for your comment. The short answer is we’re actively evaluating Enumeratum as an option!
