Wednesday, February 11, 2015

Pattern matching in Java with the Visitor pattern

I once read an essay—I cannot find it now—that talked about how learning a more advanced programming language can improve your coding even in a language that lacks those features because those language features are really ways of thinking, and there's no reason you cannot think in terms of more powerful abstractions just because you need to express them in a more limited language. The phrase that stuck with me was that you were "writing into" that language, rather than writing in the language as it's meant to be used. At Wealthfront, while the majority of our backend code is Java, we use a variety of methods that originate in functional languages. We've written before about our Option. This article is about pattern matching in Java.

I'm going to take a digression into explaining what pattern matching is and why it's so fantastic. If you like, you can also skip ahead to the actual Java examples.

Inspiration from post-Java languages

Pattern matching is a feature common in modern functional languages that allows a structure similar to switch but on the type of the argument. For example, let's take a base class that might have two subclasses, and we want to write logic that handles the two subclasses differently. An example might be a payment record that varies in type according to the method of payment (e.g. Bitcoin vs. credit card). Or maybe an owner that varies depending on whether it represents a single user or a group. This is useful for any class hierarchy representing some sort of data that has a base set of fields and subclasses that may have other data.

The visitor pattern was around before Haskell and Scala wowed everyone with pattern matching, but seeing pattern matching makes it easier to see why it's useful.

Scala pattern matching

Scala supports a match operator that concisely expresses the idea of switching on the type of the object.
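
A minimal sketch of such a snippet (reconstructed from the description below, not the original listing):

sealed abstract class Foo
case class Bar(i: Int) extends Foo
case class Baz(s: String) extends Foo

def handle(f: Foo): String = f match {
  case b: Bar => s"got an Int: ${b.i}"
  case b: Baz => s"got a String: ${b.s}"
}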

What does this do? First, we have an abstract class Foo with two subtypes Bar and Baz. These have different parameters. Bar stores an Int, Baz a String. Then we have a handle method that uses match to extract the appropriate field from either of these. Each matched case could have whatever logic you may need, and the type of b in both of these is the specific subclass, not just Foo.

Swift enums

Swift offers this same functionality through enums, which behave more like types than values.
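
A sketch of the Swift equivalent (reconstructed to mirror the Scala example, not the original listing):

enum Foo {
    case bar(Int)
    case baz(String)
}

func handle(_ f: Foo) -> String {
    switch f {
    case let .bar(i):
        return "got an Int: \(i)"
    case let .baz(s):
        return "got a String: \(s)"
    }
}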

The syntax differs, but it works out to the same thing. The let keyword is a helpful reminder of what's going on here: a new variable is created holding the same value that was in f, but now with the specific type.

Simpler Java solutions

instanceof

One simple solution that comes to mind is to use instanceof.
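
A sketch of the instanceof approach (reconstructed; the getValue accessors are hypothetical):

public String handle(Foo f) {
  if (f instanceof Bar) {
    Bar bar = (Bar) f;
    return "got an int: " + bar.getValue();
  } else if (f instanceof Baz) {
    Baz baz = (Baz) f;
    return "got a String: " + baz.getValue();
  } else {
    throw new IllegalArgumentException("Unknown subtype of Foo: " + f.getClass());
  }
}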

There are a few problems with this:
  1. Because Hibernate wraps the class in a proxy object, and then intercepts method calls to make sure the right code is called, objects loaded from Hibernate will never be instances of the derived type, and this will not work.
  2. Correctness is not enforced by the compiler. It's perfectly valid Java to say if (f instanceof Bar) { Baz b = (Baz) f; }, but it will fail every time.
  3. There is no way to ensure completeness. There's nothing I can do to the existing code to make sure that it gets updated when someone adds a new subtype Qux.

Moving logic into the class

Another solution is to embed this logic in the class, like OOP says we should.
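
A sketch, with the handling logic moved onto the subclasses themselves:

public abstract class Foo {
  public abstract String handle();
}

public class Bar extends Foo {
  private final int value;

  public Bar(int value) { this.value = value; }

  @Override
  public String handle() { return "got an int: " + value; }
}

public class Baz extends Foo {
  private final String value;

  public Baz(String value) { this.value = value; }

  @Override
  public String handle() { return "got a String: " + value; }
}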

This works fine when it's one method, or a few, but what happens when it grows to dozens? It means that your data objects start to be the location of all your business logic. If that's your style, that's okay and you have a lot of company, but I find it difficult to think about things this way. For example, if I want to verify the identity of owners of individual, joint, and custodial accounts, I could put a "verify identity" method in the AccountOwner type, but I'd prefer to create a single IdentityVerifier class that encapsulates all the business logic about verifying identity. The visitor pattern fits in a model where data objects are simple and business logic is primarily implemented in various processor or "noun-verber" classes.

Another issue with business logic in the data class is that it makes it harder to mock for testing. With a processor interface, it's easier to mock it and return whatever data you want. With business logic in the class, you need to either set up the class so that your data actually satisfies all those rules, or you need to override the methods to return what you want. It makes it harder than it should be to write a test saying something like "accounts whose owners can be identified may be opened immediately".

The visitor pattern in Java

Basic Visitor

The basic visitor pattern in Java consists of the following (a sketch follows the list):

  • An abstract base class with an abstract method match or visit taking a parameterized Visitor.
  • A parameterized Visitor class with a case method for each subclass.
  • Subclasses of the base class that each call the appropriate method of the visitor.
  • Application code that creates an anonymous instance of the visitor implementing whatever behavior is desired for that case.
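
A minimal sketch of all four pieces together (reconstructed; class and method names are illustrative):

public abstract class Foo {
  public interface Visitor<T> {
    T caseBar(Bar bar);
    T caseBaz(Baz baz);
  }

  public abstract <T> T match(Visitor<T> visitor);
}

public class Bar extends Foo {
  private final int value;

  public Bar(int value) { this.value = value; }

  public int getValue() { return value; }

  @Override
  public <T> T match(Visitor<T> visitor) { return visitor.caseBar(this); }
}

public class Baz extends Foo {
  private final String value;

  public Baz(String value) { this.value = value; }

  public String getValue() { return value; }

  @Override
  public <T> T match(Visitor<T> visitor) { return visitor.caseBaz(this); }
}

// Application code: an anonymous visitor supplies the per-case behavior,
// and the compiler forces a case for every subtype.
String result = foo.match(new Foo.Visitor<String>() {
  @Override
  public String caseBar(Bar bar) { return "got an int: " + bar.getValue(); }

  @Override
  public String caseBaz(Baz baz) { return "got a String: " + baz.getValue(); }
});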

Default visitor

It's sometimes useful to have special logic for some of the subclasses, and a default value for others. This can make the code more readable because it removes boilerplate which isn't part of what the code is trying to accomplish. You can do this with an implementation of the interface that supplies a default value for anything not overridden.
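
A sketch of such a default visitor, building on the Visitor interface above:

public class DefaultFooVisitor<T> implements Foo.Visitor<T> {
  private final T defaultValue;

  public DefaultFooVisitor(T defaultValue) { this.defaultValue = defaultValue; }

  @Override
  public T caseBar(Bar bar) { return defaultValue; }

  @Override
  public T caseBaz(Baz baz) { return defaultValue; }
}

// Only the interesting case is overridden; everything else gets the default.
String result = foo.match(new DefaultFooVisitor<String>("something else") {
  @Override
  public String caseBar(Bar bar) { return "got an int: " + bar.getValue(); }
});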

The downside of this pattern is that updating default visitors can be overlooked when a new case is added. One way to handle this in practice is, while adding the new case, to make the default visitor abstract without implementing the new case, review all the code that breaks, and, once satisfied that the behavior is correct, add the default implementation for the new case.

Void or Unit return values

We generally define our visitors as being parameterized by a return type, but sometimes no return value is needed. At Wealthfront we have a Unit type with a singleton Unit.unit value to represent a return value that isn't meaningful, but java.lang.Void is also used.
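
For example, a visitor used purely for side effects can be parameterized by Void:

foo.match(new Foo.Visitor<Void>() {
  @Override
  public Void caseBar(Bar bar) {
    System.out.println("got an int: " + bar.getValue());
    return null; // Void has no instances, so null is the only possible return value
  }

  @Override
  public Void caseBaz(Baz baz) {
    System.out.println("got a String: " + baz.getValue());
    return null;
  }
});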

I've used Void in this example to avoid introducing a new type, but I feel compelled to link to a discussion of why this is not ideal from a functional perspective: void vs. unit.

Destructuring pattern matching

These make up all that you likely need and probably 90% of our use of the visitor pattern, but there's one more item that is worth mentioning. My Scala example above doesn't actually show the full power of pattern matching, because I'm just matching on the type. With case classes, or with a custom unapply method, I can actually match not just on the types of the objects, but on details of their internal structure. For example, using types similar to what I used before, here's a version that treats anything above 10 as "many".
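
A sketch of such a match (reconstructed; the guard condition is the part to notice):

def describe(f: Foo): String = f match {
  case Bar(n) if n > 10 => "many"
  case Bar(n)           => s"a few: $n"
  case Baz(s)           => s"a string: $s"
}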

Since this is a language feature in Scala, it's flexible and easy to use. You can simulate the same behavior in Java, but you need to encode the cases that are allowed into the visitor itself.
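
One hypothetical way to encode the "many" case into the visitor itself (heavily abridged relative to the full version):

public interface DestructuringVisitor<T> {
  T caseManyBar();       // a Bar holding a value above 10
  T caseFewBar(int n);   // a Bar holding a value of 10 or below
  T caseBaz(String s);
}

// Bar's match method, not the caller, decides which case applies:
@Override
public <T> T match(DestructuringVisitor<T> visitor) {
  return value > 10 ? visitor.caseManyBar() : visitor.caseFewBar(value);
}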

In some sense, 16 vs. 58 lines is a big difference, but you could also argue that 42 lines of additional boilerplate to simulate this powerful functionality in Java is worth it. This destructuring pattern matching is most useful for value types: objects that just represent collections of data but don't have any other identity attached to them. For entity types, which represent something that is defined as itself regardless of what values it currently has, it's better to use the basic pattern matching.

Is this really pattern matching, and how useful is it?

Some might object that this isn't "really" pattern matching, and I would agree. Pattern matching is a language level feature that allows you to operate on subclasses in a type-safe way (among other things). The type-safe visitor pattern allows you to operate on subclasses in a type-safe way even without language support for pattern matching.

As to its utility, I can say that we use it extensively at Wealthfront, and once people become familiar with it, it's great. Pretty much every polymorphic entity will have a visitor, which makes it safe for us to add new subtypes, since the compiler will let us find all the places we need to make sure it's handled. Visitors on value types, especially destructuring visitors, are much less common. We use it in a few places for things like Result objects that represent a possible result or error.

Give it a try the next time you run into a ClassCastException in your code.

Friday, January 9, 2015

WF-CRAN: R Package Management at Wealthfront

R is a powerful statistical programming language with a large range of built-in capabilities that are further augmented by an extensive set of third-party packages available through the Comprehensive R Archive Network (CRAN). At Wealthfront, R is an important tool with many use cases across the company. These include analyzing our business metrics to enable our data-driven company culture and conducting research for our investment methodology, including our recently announced Tax-Optimized Direct Indexing.

Limitations Of CRAN

To support the multiple use cases of R at Wealthfront, we rely on a large number of packages, both third-party packages available through CRAN and packages we have developed internally. The wide availability of existing packages and the ease of creating new ones enables powerful and sophisticated research and analysis, but the management of these packages can be difficult.

For third-party packages, CRAN makes it easy to quickly obtain and install new packages, but parts of its design are problematic when used in business-critical settings. At Wealthfront, we faced the following challenges when using CRAN.

Archiving Of Packages And Lack Of Versioning Make Reproducibility Difficult

On any day, new packages may be added to CRAN, old packages archived, and existing packages updated. Although individual packages themselves are versioned, only the latest package versions are available for download and installation using the built-in install.packages function. By default install.packages downloads the latest package and overwrites any existing older version of the package. The library function then loads the version that was last downloaded and installed.

The ephemeral state of CRAN poses multiple problems. First, the lack of explicit versioning when installing and loading packages makes reproducing analyses and research difficult. The exact results obtained may depend on the specific package versions used and these may vary from machine to machine. Changes to packages may also not be backward compatible, often breaking existing tools and scripts. Further, packages that have been archived on CRAN can no longer be installed using install.packages and must instead be directly downloaded and then installed from the source tarball.

Inconsistent Installation And Dependency Management For Proprietary Vs. Third-Party Packages

In addition to the third-party packages we use at Wealthfront, we also have a large number of internally developed packages we depend on. These packages are proprietary and so cannot be uploaded to CRAN. Having some packages from CRAN and others from private repositories results in inconsistent installation processes for both sets of packages. For third-party packages from CRAN, built-in R functions such as install.packages can be used, but for proprietary packages either built-in R tools such as R CMD INSTALL or methods such as install_git and install_local from the devtools package must be used. Further, R CMD INSTALL and devtools do not handle dependencies between proprietary packages, so the packages must be manually installed in an order that satisfies the dependency chain.

Rather than have packages in two separate locations and different installation and dependency management procedures, we want all packages to be accessible from a single location with consistent steps for installing packages and managing dependencies.

Inability To Review And Document Use Of Third-Party Packages

While there are a large number of packages available on CRAN, the quality of these packages can vary greatly. The quality of any research or analysis is dependent on the packages used and so we must ensure that the packages we rely on are well-written, maintained, and well tested. Thus, for any analysis or research that is production or business critical, we must ensure that we have vetted the packages being used.

WF-CRAN

We developed an internal CRAN with these limitations in mind. WF-CRAN holds the third-party packages we depend on along with our internally-developed packages. The following describes our design choices for implementing WF-CRAN and how these choices addressed the challenges we faced with CRAN.

Repository Versioning

There has been much community discussion and debate about the lack of CRAN package versioning (see this discussion about having a release and development distribution of CRAN and this R Journal article discussing options for introducing version management to CRAN). With WF-CRAN, we took a versioned repository approach. Each time a package is added or modified in WF-CRAN, a new version of the repository is created that includes the latest version of the packages contained in the repository. With this approach, we can continue to use the existing R functions for managing and loading packages, including install.packages and update.packages, by explicitly specifying the repository version using the repos argument. For example, the package xts from version 1.81 of WF-CRAN can be installed using:

install.packages("xts", repos = "http://wf-cran/1.81")

Further, dependencies of packages, both third-party and proprietary, can be automatically installed by specifying the dependencies argument for install.packages.
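
For example, to install xts along with everything it depends on from the same repository version:

install.packages("xts", repos = "http://wf-cran/1.81", dependencies = TRUE)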

CRAN Nexus And Maven Plugins

Rather than write much of the server software from scratch, we chose to implement WF-CRAN on top of Sonatype Nexus by writing a CRAN Nexus plugin. Together with an internally developed CRAN Maven plugin, new packages and package versions can easily be deployed by navigating to the directory containing the package and running the command mvn deploy. Maven and our Maven CRAN plugin are then responsible for running the R CMD build command to create the package source archive and uploading this to WF-CRAN. The Nexus plugin then creates the repository structure and PACKAGES file for the new versioned snapshot of the repository by parsing the package DESCRIPTION files and creating symlinks to the latest versions of each package. We currently only upload the package source to WF-CRAN and so all packages must be installed from source. This is made the default for install.packages by setting

options(pkgType = "source")

in each user’s .Rprofile.

Source Control Of Packages

The source for the packages themselves is kept in Git repositories. We have multiple repositories containing R packages, with each repository containing packages that support a set of related use cases, such as research or business analytics. We also have a repository that holds the source code for all of the third-party packages we use. This allows us to document which packages we depend on and to explicitly approve the use of new third-party packages by code reviewing the addition of the packages to the third-party repository. Although packages are contained in multiple repositories, once the packages are deployed to WF-CRAN, they can all be installed and loaded using the built-in R functions such as install.packages and library.

With this setup, the workflow for adding or modifying a package in WF-CRAN is:

  1. Create a branch in the Git repo containing the package
  2. Have the change code reviewed
  3. Merge to master and run mvn deploy

WF-Checkpoint

One of the main motivations for WF-CRAN was enabling verifiably reproducible research and analysis. The versioned repository structure of WF-CRAN enables this by allowing us to explicitly specify the version of the repository used in a script. We do this using an approach similar to that of Revolution Analytics’ Reproducible R Toolkit and checkpoint package. Revolution Analytics’ approach involves taking a daily snapshot of CRAN and making these snapshots available through their checkpoint-server. In a script, the checkpoint function is used to specify which daily snapshot of CRAN to use.

library(checkpoint)
checkpoint("2015-01-01")

The checkpoint package then analyzes the script to find all of the package dependencies and installs these to a local directory using the versions from the specified snapshot date. The script is then run using these packages.

At Wealthfront, we developed a similar package, WFCheckpoint, that takes a version number rather than a snapshot date as an argument and uses the packages for the specified repository version to run the script. For example, to run a script using packages from version 1.81 of WF-CRAN, the following can be added to the top of the script:

library(WFCheckpoint)
checkpoint("1.81")

The WFCheckpoint package, together with WF-CRAN, thus allows us to easily reproduce the results of any research or analysis.

How This Has Helped Us

As a data-driven and research-intensive company, our culture and success are built on producing high-quality, reproducible research and analysis. WF-CRAN and WFCheckpoint have allowed us to bring our rigorous engineering practices to R, enabling more scalable backtests, sophisticated and interactive dashboards, and a more unified development environment for research.

At Wealthfront, we strive to constantly improve the investment products and services we provide to our clients and WF-CRAN and WFCheckpoint are just a few examples of the tools we have developed at Wealthfront that enable us to do so.

Thursday, October 16, 2014

Security Notice on POODLE / CVE-2014-3513 / CVE-2014-3567

On October 14 a vulnerability named POODLE, affecting the SSLv3 protocol, was announced by Google. Two advisories, CVE-2014-3513 and CVE-2014-3567, describe a vulnerability in OpenSSL's implementation of the SSLv3 protocol and another vulnerability that allows a MITM attacker to force a protocol downgrade from secure TLS to vulnerable SSLv3.

In response to the POODLE vulnerability, Wealthfront disabled SSLv3 access to our websites. For clients using SSLv3 to access our websites, we instead provide links to upgrade their browser.

Further Resources for POODLE Help

We recommend auditing all systems using OpenSSL and upgrading when vendor fixes are available. Here are some resources we found useful in our response to this disclosure:

As always, if you have any questions about the security of your Wealthfront account, contact us at support@wealthfront.com. We will continue to monitor this issue as the community and vendors investigate this vulnerability further.

Thursday, October 2, 2014

Touch ID for Wealthfront App

At Wealthfront, our clients count on us to provide them with delightful financial services built with leading technology. They have chosen to trust us with some of their most important financial needs, and keeping their money and data secure is of the utmost importance to us. When we released the Wealthfront iOS App in February we required our clients to log in to the app if it had been inactive for more than 15 minutes, causing many of them to enter their full password multiple times each day. We soon pushed out our PIN unlock feature to allow them to view their data in the app with a four-digit PIN. When a client needs "privileged" access, for example scheduling a deposit, the app still requires their password. This way we can ensure security around sensitive events while providing greater convenience for everyday use.

As you can see from the graph below, within a week after our PIN unlock feature went live, more than 75% of clients were actively using it. To this day it remains a highly utilized feature.



One of the exciting new features Apple announced at this year's WWDC allows developers to use biometric-based authentication, or Touch ID, right within their apps. Touch ID is very secure because fingerprints are saved on the device in a security co-processor called the Secure Enclave. It handles all Touch ID operations and, in addition, it handles cryptographic operations used for keychain access. The Secure Enclave guarantees data integrity even if the kernel is compromised. For this reason, all device secrets and passcode secrets are stored in the Secure Enclave.

Touch ID makes authenticating with an application even easier than our PIN feature while providing additional layers of security. From the day it was announced we've wanted to use Touch ID to allow our clients to authenticate with the Wealthfront app. Apple provides two mechanisms for us to integrate Touch ID:
  1. Use Touch ID to access credentials stored in the keychain
  2. Use Touch ID to authenticate with the app directly (called Local Authentication)
We carefully built test apps to compare each of these two approaches and today we'll examine our thought process and how we chose which mechanism best suited our needs.

Decide which Touch ID mechanism to use

The following is a diagram adapted from WWDC 2014 Session 711 to compare the two authentication mechanisms:



The biggest differences between keychain access and local authentication are:
  • Keychain access
    • The Keychain is protected with the user's passcode; it is also protected with a unique secret built into each device, known only to that device. If the keychain is removed from the device, it is not readable
    • The Keychain can be used to store a user’s full credentials (e.g. email and password) on the device, encrypted by the Secure Enclave and unlocked based on authentication policy evaluation:
      • If a device does not have a passcode set, the Secure Enclave is locked and there is no way to access any information stored in it
      • If a device has a passcode, the Secure Enclave can be unlocked by the passcode only
      • If a device has Touch ID as well, the preferred method is to authenticate with Touch ID and passcode is the backup mechanism
      • No other fallback mechanism is permitted and Apple does not allow customization of the fallback user interface
  • LocalAuthentication
    • Any application can directly call LocalAuthentication for Touch ID verification
    • No permission is granted to store secrets into or retrieve secrets from the Secure Enclave
    • Contrary to the keychain access case, Apple does not allow device passcode authentication as a backup
    • Every application needs to provide its own fallback to handle failed Touch ID case with custom UI
We had one major concern about storing sensitive information in the keychain: the only fallback for failing to authenticate with Touch ID is the device passcode. iOS users usually configure a four-digit passcode, which we feel is less secure than their account password. Apple, for example, uses your iCloud account password as the fallback mechanism if you are trying to make a purchase on the iTunes store and fail to successfully authenticate with Touch ID. If we authenticate with Touch ID via LocalAuthentication, we can use our PIN unlock feature or the client's password as the fallback mechanism. We still don't store the password on the device; failure to authenticate with Touch ID requires full authentication with Wealthfront's servers if the device does not have a Wealthfront PIN configured. Furthermore, any "privileged" access still requires a password. We feel this represents the best compromise between security and convenience. Now let's take a closer look at how we implemented our integration with Touch ID.

Integrating Touch ID Through Local Authentication

Integrating Touch ID into an application is a two step process:
  1. We ask if the device supports Touch ID by calling -canEvaluatePolicy:error:
  2. We call -evaluatePolicy:localizedReason:reply: to display the Touch ID alert view; it will then call our reply block
Let's take a closer look at how we use these methods in our Wealthfront application for Touch ID authentication.

Check if Touch ID is available

The following code fragment is a simplified version from our production code; a sketch appears after the notes below.
  • Lines 5-7: To set things up, we create an instance of LAContext that will be used for Touch ID authentication.
  • Lines 9-26: We use the -canEvaluatePolicy:error: API to see if the device can use Touch ID. If we get a YES back, we know the device is capable of evaluating the LAPolicyDeviceOwnerAuthenticationWithBiometrics policy. We will invoke the second API method (below) to request a fingerprint match. If the return value is NO, we will check the error code and generate a new localError to send back to the caller. Instead of using the LAError domain, we generate our own WFTouchIDErrorDomain and error code (see below for reasons) to propagate the error message back to the caller.
  • Line 28: Here we call a method in another class to check if the user has opted out of using Touch ID.
  • Lines 29-34: Again we use our own WFTouchIDErrorDomain and error code so the caller method can parse it to get the error message.
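
In outline, the check looks something like this (a simplified sketch, not the annotated fragment itself; WFTouchIDAuthManager, the localAuthContext property, and WFTouchIDPreferences are hypothetical names):

#import <LocalAuthentication/LocalAuthentication.h>

static NSString *const WFTouchIDErrorDomain = @"WFTouchIDErrorDomain";

typedef NS_ENUM(NSInteger, WFTouchIDError) {
  WFTouchIDNotAvailable,
  WFTouchIDOptOut,
};

@implementation WFTouchIDAuthManager

- (BOOL)canAuthenticateByTouchID:(NSError **)localError {
  // Create the LAContext instance used for Touch ID authentication
  if (self.localAuthContext == nil) {
    self.localAuthContext = [[LAContext alloc] init];
  }

  // Ask whether the device can evaluate the biometrics policy
  NSError *laError = nil;
  BOOL canEvaluate = [self.localAuthContext canEvaluatePolicy:LAPolicyDeviceOwnerAuthenticationWithBiometrics
                                                        error:&laError];
  if (!canEvaluate) {
    // Translate the LAError into our own domain so callers only parse one set of codes
    if (localError != NULL) {
      *localError = [NSError errorWithDomain:WFTouchIDErrorDomain
                                        code:WFTouchIDNotAvailable
                                    userInfo:@{NSLocalizedDescriptionKey : @"Touch ID is not available"}];
    }
    return NO;
  }

  // Ask another class whether the user has opted out of Touch ID
  if ([WFTouchIDPreferences userHasOptedOut]) {
    if (localError != NULL) {
      *localError = [NSError errorWithDomain:WFTouchIDErrorDomain
                                        code:WFTouchIDOptOut
                                    userInfo:@{NSLocalizedDescriptionKey : @"Touch ID is disabled for this user"}];
    }
    return NO;
  }
  return YES;
}

@end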

Authenticate with Touch ID

If the above check result is YES, we can now proceed to the second step by calling -evaluatePolicy:localizedReason:reply:.
  • Lines 4-6: Here we first confirm the fallbackButtonTitleString, successBlock, and fallbackBlock are not nil.
  • Lines 7-9: We create a new LAContext object if it is nil.
  • Line 11: We pass a fallbackButtonTitleString as the fallback button title. This one cannot be nil; passing nil causes an exception which could crash the app.
  • Line 12: The reasonString is also required because Touch ID operations require this string to tell the user why we are requesting their fingerprint.
  • Lines 13-26: We pass the reasonString, successBlock, and fallbackBlock to -evaluatePolicy:localizedReason:reply:. The replyBlock will be passed a BOOL indicating whether or not the authentication attempt was successful. If the reply is a YES we can now proceed with the successBlock. Otherwise, we pass the error to fallbackBlock so it can check the error code to find out the reason for failure and act accordingly.
We can only call -evaluatePolicy:localizedReason:reply: when the app is in the foreground. As soon as we make the call, we will see the Touch ID alert view prompting the user to scan their registered finger; a simplified sketch of the full call follows.
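
In the same hypothetical sketch as above, the call itself might look like:

- (void)authenticateByTouchIDWithFallbackButtonTitle:(NSString *)fallbackButtonTitleString
                                             success:(void (^)(void))successBlock
                                            fallback:(void (^)(NSError *))fallbackBlock {
  // Confirm the title and blocks are not nil; a nil fallback title would cause an exception
  NSParameterAssert(fallbackButtonTitleString);
  NSParameterAssert(successBlock);
  NSParameterAssert(fallbackBlock);

  // Create a new LAContext object if it is nil
  if (self.localAuthContext == nil) {
    self.localAuthContext = [[LAContext alloc] init];
  }
  self.localAuthContext.localizedFallbackTitle = fallbackButtonTitleString;

  // The required reason string shown in the Touch ID alert
  NSString *reasonString = @"Unlock your Wealthfront account";

  [self.localAuthContext evaluatePolicy:LAPolicyDeviceOwnerAuthenticationWithBiometrics
                        localizedReason:reasonString
                                  reply:^(BOOL success, NSError *error) {
    // Dispatch back to the main queue before any UI work (see the note below)
    dispatch_async(dispatch_get_main_queue(), ^{
      if (success) {
        successBlock();
      } else {
        fallbackBlock(error);
      }
    });
  }];
}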


It is very important to use dispatch_async to dispatch the successBlock and fallbackBlock back to the main queue for UI updates. Otherwise the app will freeze for a long time, since -evaluatePolicy:localizedReason:reply: appears to use an XPC service (Apple does not document this, but we saw evidence of it in Instruments). The UI would be updated only after the XPC service gives control back to the main queue.

Customize LAError error code for iOS 7 devices

When we were working with the iOS 8 beta, we also needed to maintain compatibility with iOS 7. The problem is that LAContext and LAError are not available in iOS 7, since the LocalAuthentication framework is new in iOS 8. We also wanted to give users the option to opt out of Touch ID operations. Moreover, we rely heavily on automated testing on both devices and simulators, and LAContext gives an undocumented -1000 error code if we try to call the methods on a simulator. In order to cope with every possibility listed above, we made a custom NS_ENUM called WFTouchIDError and use the userInfo dictionary to describe any error. For example, if the user opts out of Touch ID, we use WFTouchIDOptOut so that the caller can behave accordingly. In our unit tests, we can also use WFTouchIDNotAvailable and userInfo to tell the simulator not to fail a test, since Touch ID is not supported there.

Testing

At Wealthfront, testing is in our blood; no code is shipped without test coverage. The following code snippet tests -authenticateByTouchIDWithFallbackButtonTitle:success:fallback:

This test follows a similar testing paradigm as I described in more detail in a previous blog. Briefly, we use the OCMock framework to decompose code into testable components. By isolating the code from its outside dependencies, we are able to make sure the code behaves as we expect from the bottom up. In this test, we use OCMock's andDo: block to make sure only one of the successBlock and fallbackBlock blocks is called.
  • Lines 2-6: Here we mock out LAContext and set the mocked object mockLAContext as _authManager's localAuthContext. Then we set the expectation that LAContext's -setLocalizedFallbackTitle: method will be called with the parameter @"Wealthfront PIN", as it will be used to set the fallback button title.
  • Lines 8-15: We spell out another anticipated LAContext call with expected parameters. We require reply: to be valid by setting the expectation to [OCMArg isNotNil], so we can execute the void(^reply)(BOOL success, NSError *error) block inside the andDo: block.
  • Lines 17-25: We call -authenticateByTouchIDWithFallbackButtonTitle:success:fallback: with a simple successBlock in which the BOOL variable value is changed, and we make sure the fallbackBlock is not called by inserting an XCTFail inside the fallbackBlock.
  • Lines 27-30: We make sure that there is an exception thrown if we try to set the fallbackButtonTitle to be nil.

Final Thoughts

At Wealthfront, our mission is to bring the best and latest technology to our clients and thereby improve their experience. In iOS 8 Apple has finally provided a public API for us to leverage Touch ID in our applications. This allows us to deliver greatly enhanced convenience to clients without sacrificing security. We carefully considered the implications of adopting the Touch ID mechanism. Direct interaction with LocalAuthentication gives our clients the best experience and greatest security. It was very important to support Touch ID as soon as possible to give our users a delightful experience. We leveraged our continuous integration infrastructure to both validate our integration as well as verify that Apple had fixed bugs we discovered during the beta. This allowed us to be ready to put a build in the App Store within a few hours of Apple releasing the iOS 8 GM build.

Friday, September 26, 2014

Security Notice on Shellshock / CVE-2014-6271 / CVE-2014-7169

On September 24 a vulnerability in Bash, named Shellshock, was publicly announced. The original Shellshock advisory, CVE-2014-6271, described a severe remotely-exploitable vulnerability in all versions of GNU Bash software. A follow-up advisory, CVE-2014-7169, was issued for an incomplete fix to CVE-2014-6271.

Security review of Wealthfront systems confirmed no client-facing components were vulnerable to Shellshock. The Wealthfront team deployed fixes for CVE-2014-6271 and CVE-2014-7169 on all internal hosts, consistent with security best practices.

Further Resources for Shellshock Help

We recommend auditing all systems using Bash and upgrading. Here are some resources we found useful in our response to this disclosure:

As always, if you have any questions about the security of your Wealthfront account, contact us at support@wealthfront.com. We will continue to monitor this issue as the community and vendors investigate this vulnerability further.

Friday, September 19, 2014

Small Files, Big Problem

Data drives everything we do here at Wealthfront, so ensuring it’s stored correctly and efficiently is of the utmost importance. To make sure we’re always as informed as possible we gather data from our online databases very frequently. However, this has the unfortunate side effect of creating a large number of small files, which are inefficient to process in Hadoop.

The Small Files Problem


Typically, a separate map task is created for each file in a Hadoop job. An excessive number of files therefore creates a correspondingly excessive number of mappers. Further, when many small files each occupy their own HDFS block, an enormous amount of overhead is incurred in the namenode. The namenode tracks where all files are stored in the cluster and needs to be queried any time an application performs an action on a file. The smooth performance of the namenode is thus of critical importance, as it is a single point of failure for the cluster. To make matters worse, Hadoop is optimized for large files; small files cause many more seeks when reading.

This set of issues in Hadoop is collectively known as the small files problem. One good solution when pulling small files stored in S3 to a cluster is to use a tool such as S3DistCp, which can concatenate small files by means of a ‘group by’ operator before they are used in the Hadoop job. We, however, cannot use this tool for our data set. Our data is stored in Avro files, which cannot be directly concatenated to one another. Combining Avro files requires stripping the header, which requires logic that S3DistCp does not provide.

A Consolidated Files Solution


To solve the small files problem, we periodically consolidate our Avro files, merging their information into a single file that is much more efficient to process. For data that is gathered hourly, we take the files for each hour of the day and merge them into a single file containing the data for the entire day. We can further merge these days into months. The monthly file contains the same data as the set of all the hourly files falling within its span, but it is contained in a single location instead of many. By switching from the original hourly files to monthly ones, we can cut down the number of files by a factor of 720 (24 hourly files a day times roughly 30 days a month).



Hours combine to form a day, days combine to form a month*.
*Wealthfront is aware that there are more than 3 hours in a day and more than two days in a month; this is a simplified visualization

We must ensure that we do not take this too far, however. Consolidating already large files can begin to reduce performance again. To prevent this, the code only consolidates the files if the combined size does not exceed a specified threshold. This threshold is chosen based on the HDFS block size; there is no gain to be had from a file that already fills a block completely.

Selecting Files For Computation


This creates the new challenge of dealing with files that span different durations. In general, when requesting data across an interval of time we want to choose the fewest files that will give us the desired dataset without any duplicates. Consider the following diagram representing a collection of files arranged chronologically. We wish to fetch only the data falling between the red lines, using as few files as possible.


  

Our approach is a greedy algorithm that takes files spanning the largest amount of time first, then considers progressively smaller intervals. In this case, we first consider the monthly intervals. We eliminate the first month because it includes data outside our requested timeframe.



We next consider the days. We first eliminate the day not fully in our timeframe. We also eliminate the days that overlap with our previously selected month.




Applying the same action to hours gives us our final choice of files to use.




Note that the entire interval is covered, there is no duplication of data, and a minimum number of files are used. We fetch the same data that would have been retrieved by taking each hourly file in our interval, but it arrives in a format that is much more fit to process in Hadoop.
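
A sketch of this selection logic in Java (the DataFile and Interval types are hypothetical, and this is not our production code):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class FileSelector {
  // Greedy selection: consider the coarsest files first, and keep any file that
  // falls fully inside the requested interval and does not overlap a file
  // already selected at a coarser level.
  static List<DataFile> select(List<DataFile> files, Interval requested) {
    List<DataFile> sorted = new ArrayList<>(files);
    sorted.sort(Comparator.comparing(DataFile::duration).reversed()); // months, then days, then hours

    List<DataFile> selected = new ArrayList<>();
    for (DataFile file : sorted) {
      boolean inRange = requested.contains(file.interval());
      boolean overlaps = selected.stream()
          .anyMatch(chosen -> chosen.interval().overlaps(file.interval()));
      if (inRange && !overlaps) {
        selected.add(file);
      }
    }
    return selected;
  }
}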

Handling Outdated Files 


The danger in creating consolidated files lies in the fact that the derived file will become outdated if the source files beneath it are changed. We protect ourselves against this risk by validating consolidated files when they are requested for use. If there is a file spanning a smaller interval of time that was updated more recently than the file meant to encapsulate it, the underlying data has changed since the consolidated file was created. We ignore the outdated file and go down to the next lower level of smaller files. We also log an error noting the consolidated file is outdated so it may be recreated and made fresh again.

Results


We find that this consolidation has an enormous impact on performance. Our first test case was a job that previously operated on a few years' worth of data, bucketed into files by hour. When attempting to use this multitude of small files, the cluster would fail after more than 2 hours when it ran out of memory. With the consolidated files, the same cluster successfully completed the job in 1 hour 15 minutes.



This strategy comes with the tradeoff of significantly increased disk space usage in our cloud storage system as we store a full copy of our data for each level of consolidation used. In our case, this is a very small penalty compared to the enormous gains in performance.

Our infrastructure is extremely important to us at Wealthfront. We are constantly working to ensure that our systems can support the rapid influx of new clients. File consolidation is just one of the many ways we keep our infrastructure as efficient and robust as possible. 

Thursday, September 18, 2014

The Unanticipated Intern Experience

Two hours after I walked through the front door at Wealthfront, I pushed code to production. Two weeks after that I took part in conference calls to outside business partners. Two weeks after that I planned a critical feature with product managers. Two weeks after that I debated UI elements with the lead designers at Wealthfront. Two weeks after that I wrote analytics code for the new features. It's more than I bargained for as a software engineer intern, and more than most would expect even as a full time engineer in Silicon Valley. But at Wealthfront it happens by design. Flat teams commissioned to self-organize as they see fit pull interns along simultaneously in the directions of engineering fundamentals, client-centric design and strategic business plans.

But as challenging and eye opening as it's been to sweep through the process of planning and designing a feature, that's only half the story of my time here. I worked as an engineer, after all, and perhaps the most memorable and valuable experience was the responsibility for prototyping, architecting, and building product-critical features. Sure, plenty of companies let interns take charge of projects and some companies let interns get their hands on critical products. Some even let interns build projects that may one day affect customers.

What sets Wealthfront apart is a willingness to give new employees full responsibility for projects that will immediately affect customers within days, if not hours. I spent much of the summer working on a feature to save clients time and effort as they set up their account. That's an obvious win for clients, and an equally obvious asset for Wealthfront. More importantly, the changes are a key enabler of future, even larger improvements to the whole client experience. Just as I was not siloed into a narrow role as a developer, I was also not siloed into a narrowly useful project.

Maximize your importance
Before thinking about specializing, every intern or full-time employee coming out of school is going to have to grapple with the gap between their experience and the new scale, breadth, and pace of a real software company. There are both technical and operational differences between how we learn to work in school and how employees work in Silicon Valley software companies.

It was immediately obvious that there were parts of the technology stack I was unfamiliar with and a couple of programming styles I hadn't seen before. But interestingly enough, I found the technical gap easy to bridge. Reading through the codebase and asking coworkers a handful of questions was more than sufficient to fill in the gaps, probably because I already had a mental framework for understanding software. The difficult part was adjusting to the fact that, for the first time, it took more to manage a project than an assignment handout, a group text thread, and a git repo. Knowing your own code and understanding a variety of job titles isn't enough; it takes observation and effort to understand how to integrate into and work with a highly horizontally mobile team.

Developing that framework is one of the largest benefits I did not expect to gain. It will pay dividends in my ability to evaluate companies, onboard onto new teams and contribute to their work processes.

The more exciting difference between school and real world software projects is simpler: The stakes are higher. Instead of working for a letter grade and having one or two users, there's more than a billion dollars and tens of thousands of customers who depend on your code. Obviously, that changes both your mindset and your workflow. Not only is this an important lesson to learn in an environment surrounded by mentors and extensive testing, it's also satisfyingly meaningful. For those of us fresh out of the classroom, finding a place where our work genuinely matters will affect our mindset and productivity much more than any technology or workflow.

Learn faster, learn smarter
While the potential meaningfulness of your work may not always be feasible to evaluate as a prospective intern or employee, there are a couple factors that are both visible to interviewees and fundamental for a new employee’s learning.

The single most important driver of my technical development this summer was feedback from both code reviews and test results. Maximizing learning, then, necessitates maximizing my exposure to feedback. Short of demanding more feedback (which has obvious drawbacks), the most practical way of doing this is maximizing the speed of the feedback loop for my work. I have worked in tech companies with development cycles ranging in length from 6 weeks to, thanks to Wealthfront, about 6 minutes. Often faster, since robust test suites at every level give reliable feedback for code correctness within seconds. Access to team-wide code review and deployment within minutes is a fast track for not only code, but also skill development.

Students looking to intern often wisely look for an internship where they believe they’ll work under and learn from the best people they can. What we don’t often realize is that the amount you learn from leaders is not just a function of the quality of the leaders, but also of the transparency and communication between you and those you work for. Employees always know what decisions are made. In good organizations, they know why these decisions are made. At Wealthfront, I know how they are made. Data-driven culture and its child principle of data democratization certainly make this easier, but there’s also a human aspect to this culture. I speak daily with a mentor and more than weekly with our VP of Engineering. Sometimes we talk about a javascript implementation and sometimes about types of stock and funding rounds.

Structure speaks louder than words
It’s hard, especially as a prospective intern, to determine whether a company will offer you the kind of learning opportunity you seek. I do now recognize, though, that the potential for learning as an intern at Wealthfront didn’t come from a proclaimed focus on interns but instead is the deliberate residue of the larger design of how Wealthfront engineering works. The flat and integrated team structure enabled the breadth and pace of my experience. The robust test structure and lack of hierarchy enabled the level of responsibility and ownership other interns and I had. The unrelenting focus on automated testing and continuous deployment enabled the feedback loop. These characteristics are the result of both intention and expertise, and the opportunities I had could not have occurred without them.

The knowledge of how to recognize these characteristics in future companies might just be the most valuable lesson I’ve learned at Wealthfront.