Monday, July 7, 2014

My Internship with Wealthfront

I’m wrapping up my first couple of weeks at Wealthfront, and I can confidently say that I made the right decision for my last internship before graduation. In my first week, I was told, “Now you know what it’s like to work at a hypergrowth startup company.” I couldn’t agree more: it has been an amazing time to be here, from preparing for our recent announcement of reaching $1 billion in AUM to digging into research projects that might just shape how we reach our next $1 billion. Let me start from the beginning and describe my start as a Wealthfront quantitative research intern.


Working environment


The anticipation of joining a vibrant start-up began a month before my term, with the introduction emails. Interns from across the nation shared their backgrounds and hobbies, and I had never seen such an energetic cohort, with interests ranging from extreme hiking to music production to ballroom dancing. On my first day, I was pleasantly surprised by the sophistication and breadth of the data and technology, coupled with documentation that is crowd-sourced and well maintained. In addition, every intern was assigned a mentor to assist and supervise them. Thanks to all the help that Wealthfront provides, I spent little time setting up and immediately jumped into quantitative analysis using R. R provides a wide variety of statistical and graphical techniques and is a favorite tool among statisticians. I was delighted to find that the Wealthfront research team has adopted R as its primary language for data visualization, analysis, and modeling.

While my doctoral dissertation relies heavily on analysis in R, the pace and style of development here are on an entirely different level. Wealthfront is dedicated to being a data-driven company: strategic decisions are made based on insights generated from data analysis and research. This is reflected in the company’s heavy emphasis on building an efficient and secure data platform, and on maintaining and improving data quality. On the analysis side, all code is organized into functions and then into R packages. Computationally intensive models are carefully designed and written using R’s Reference Classes. Rather than writing ad-hoc code, we write structured scripts to ensure our analysis is reproducible. And finally, everything is tested to ensure accuracy: all R functions and classes are coupled with tests.


First project 


My first task was to analyze the spread of the exchange-traded funds (ETFs) Wealthfront invests in. Our investment team is led by the renowned economist Dr. Burton Malkiel, who famously argued in his bestselling book A Random Walk Down Wall Street (soon to be released in its 11th edition) that for a long-term investment horizon, the optimal strategy is to invest in a portfolio diversified across relatively uncorrelated asset classes, customized for individual risk tolerance. Following his investment philosophy, we invest in a widely diversified portfolio consisting of 11 asset classes, each represented by a low-cost, passive ETF (see our investment whitepaper). Wealthfront simply charges a management fee of 0.25% for accounts over $10,000, and charges no trading commissions. My first project was to calculate and visualize our daily average buy and sell prices for our 2014 ETF trades. The difference between the buy and sell prices should reflect the ETFs’ spread and our execution quality. Wealthfront buys and sells ETFs at different points during the day, so what I analyzed was the “effective” spread that we paid or collected on average in our trades. As a start, I studied the spread for two major ETFs that we invest in: VTI, which represents the asset class of US stocks, and MUB, which represents the asset class of municipal bonds.


The first step is to load real-time trading data into R. Our data is stored in a unified cloud data warehouse (DWH), which offers fast query performance and handles large-scale data. Perhaps not surprisingly (since we are a technology company), our day-to-day operations generate a large amount of data. Our data engineers do a very good job delivering and maintaining high-quality data; thanks to their work, there is a large set of well-formatted, clean data tables available for various research purposes. To make things even better, Wealthfront has developed its own R function for loading data directly from the DWH given a simple SQL query string. For the spread analysis, all I had to do was the following:
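A sketch of that call (the SQL string below is illustrative, not the original):

    # Load this year's VTI trades from the data warehouse.
    data <- ExecuteSqlQuery("
      SELECT date, time, quantity, price, fee, action
      FROM trades
      WHERE instrument_id = 8897
        AND date >= '2014-01-01'
      ORDER BY date, time")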


where 8897 is the instrument ID of VTI. The output of the function ExecuteSqlQuery is a data frame. In this study, data is a data frame with columns date, time, quantity, price, fee and action for each trade that we’ve made since the beginning of this year, ordered chronologically by the time trades occurred.

With the data frame ready, for each ETF on each trading day $i$, I calculated the quantity-weighted average sell price

$$\bar{P}^{\text{sell}}_i = \frac{\sum_{j=1}^{n_i} p_j q_j}{\sum_{j=1}^{n_i} q_j},$$

where $n_i$ represents the number of sales made during that day, and $p_j$ and $q_j$ are the price and quantity of the $j$-th sale. Similarly for purchases, I calculated

$$\bar{P}^{\text{buy}}_i = \frac{\sum_{k=1}^{m_i} p_k q_k}{\sum_{k=1}^{m_i} q_k},$$

where $m_i$ represents the number of purchases made on each specific day. And the corresponding daily spread for this particular ETF is calculated as:

$$s_i = \bar{P}^{\text{buy}}_i - \bar{P}^{\text{sell}}_i.$$
In R, this can be done in just a few lines of code. Suppose we need to calculate the effective spread on day $i$:
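A sketch of that calculation (CalcAvg is the helper named later in this post; its exact signature here is an assumption):

    # Quantity-weighted average price for one side ("BUY" or "SELL")
    # of an ETF's trades on a given day.
    CalcAvg <- function(trades, day, side) {
      daily <- trades[trades$date == day & trades$action == side, ]
      sum(daily$price * daily$quantity) / sum(daily$quantity)
    }

    # Effective spread on day i: average buy price minus average sell price.
    spread_i <- CalcAvg(data, day_i, "BUY") - CalcAvg(data, day_i, "SELL")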

I wrote a function to iterate through all the trading days for this year and the final result is a vector of spreads.
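Along these lines (again a sketch):

    # Daily effective spreads for every trading day so far this year.
    trading_days <- sort(unique(data$date))
    spreads <- sapply(trading_days, function(d) {
      CalcAvg(data, d, "BUY") - CalcAvg(data, d, "SELL")
    })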


Lastly, all code comes with tests. To test my function, named CalcAvg, I wrote the following test:
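A minimal sketch of such a test, assuming the testthat package (the original test is not reproduced here):

    library(testthat)

    test_that("CalcAvg returns the quantity-weighted average price", {
      trades <- data.frame(date     = as.Date("2014-01-02"),
                           action   = c("BUY", "BUY"),
                           price    = c(100, 102),
                           quantity = c(1, 3))
      # (100 * 1 + 102 * 3) / (1 + 3) = 101.5
      expect_equal(CalcAvg(trades, as.Date("2014-01-02"), "BUY"), 101.5)
    })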

Results for VTI and MUB are presented below. In the first two graphs, the upper plot shows the daily time series of the Average Sell Price (dashed line) and the Average Buy Price (solid line); the lower plot shows the daily time series of the Spread. The graphs are written in R using the ggplot2 package:
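The plotting code has roughly this shape (a sketch; the daily data frame and its columns are illustrative):

    library(ggplot2)

    # daily: one row per trading day with columns date, avg_buy, avg_sell.
    ggplot(daily, aes(x = date)) +
      geom_line(aes(y = avg_buy), linetype = "solid") +
      geom_line(aes(y = avg_sell), linetype = "dashed") +
      labs(x = "Date", y = "Price",
           title = "Daily average buy (solid) and sell (dashed) prices")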

The third and last graph provides a quick comparison of the average spread between VTI and MUB since January 1, 2014. We can see that, in general, VTI has a much tighter spread than MUB, and the average spread for both ETFs is positive. This plot is also made in R with ggplot2:
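For the comparison plot, something like the following (again illustrative):

    # spread_df: columns date, etf ("VTI" or "MUB"), and spread.
    ggplot(spread_df, aes(x = date, y = spread, colour = etf)) +
      geom_line() +
      labs(x = "Date", y = "Effective spread", title = "VTI vs. MUB, 2014 YTD")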

Final thoughts 


I come from an academic environment, where our research may not always be as applicable to real-world situations as we would like. Before arriving, I had listened to academic heavyweights such as Dr. Burton Malkiel vouch for Wealthfront’s philosophy, and I was delighted in my first week to find articles from scientific journals among the relevant readings. After finishing my first project, I moved on to using statistical research methods including stepwise regression, LASSO regression, random-forest classification/regression, and other machine-learning methods to provide in-depth insights about Wealthfront’s clients. It’s exciting to bring the depth of fundamental research together with the rubber-meets-the-road approach favored by Silicon Valley firms such as Wealthfront.


While it’s the work and research that interest me most, Wealthfront’s rapid-iteration approach has also expanded my capabilities. From team softball games to long-term strategy meetings, in the short time I’ve been here I’ve already had a wide range of experiences that have broadened my skill set and let me view things from a very different perspective. My first few weeks have been fantastic, and I’m excited to take the latest statistical theory from academia and apply it to real-world problem-solving that will benefit Wealthfront’s clients.


Tuesday, June 24, 2014

Joining Wealthfront as a DevOps New Grad

I started at Wealthfront two weeks ago, and I can already say with little doubt that it was the best opportunity for me. Having taken Andy’s career advice, I was determined to start my career in the Bay Area and wanted to join a mid-sized company with momentum. I had followed the fintech space for a while, and Wealthfront’s mission of democratizing sophisticated financial advice for the masses really resonated with me.

First impressions

I got my first glimpse of the people and culture driving Wealthfront’s success during my interviews. I could tell very early on that Wealthfront was a heavily engineering-driven company with some of the best talent in the industry. Software engineers had a strong knowledge of the financial markets, and over 90% of employees actively wrote code. This extended all the way to directors and VPs. When it came to creating a strong engineering culture, Wealthfront really walked the walk. The interview process was a fun and unique experience for me, and I joined without hesitation.

Start

At Wealthfront, engineers hit the ground running. Simple documentation and great mentors give new hires a rapid ramp-up that is neither stressful nor intimidating, and we spend a lot of effort on documentation so that the learning process becomes successively easier for each new hire. On my first day I was already pushing code to production, and within the first week I had fixed bugs, updated server configurations, and, most importantly, written tests for all my commits.

The emphasis on testing at Wealthfront was obvious from the first day. Clients have trusted us with over $1 billion of their money, and with that trust comes an incredible level of responsibility. Everything is tested, even the simplest change to the code base. There are no QAs or SETs. Engineers are responsible for developing and maintaining quality software that meets the highest standards. The entire team goes to great lengths to make sure that clients’ trust in Wealthfront is rightly placed.

With automation in mind, we have software and systems that make continuous deployment simple and painless. This lets us develop software to high standards and deploy it quickly to production, so we can rapidly and reliably push enhancements and bug fixes to customers. The combination of a strong infrastructure and a test-driven culture is a major reason why Wealthfront is leading the way in this space. Everyone here really is dedicated to building and maintaining Wealthfront as an engineering-driven company.

DevOps

One of my first tasks as a DevOps engineer was to fix an issue that was causing our Backup Monitor to fail. The Backup Monitor is a service that maintains state information for all backups. After diving into the issue, I found that the problem was an uncaught exception when requesting the configured backups, and that the root cause was a brief loss of network connectivity to the Chef server. The original code looked roughly like this (a sketch with hypothetical names, assuming Chef’s standard Ruby search API):
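    # Sketch: fetch the list of configured backups from the Chef server.
    # Chef::Search::Query#search raises if the server is unreachable,
    # and nothing here catches that exception.
    def configured_backups
      nodes, = Chef::Search::Query.new.search(:node, 'roles:backup')
      nodes.each_with_object({}) { |node, backups| backups[node.name] = node['backup'] }
    end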


I explored several different approaches:
  1. Ignore the issue and continue execution.
  2. Throw the exception back to the caller and have it implement the exception handling logic.
  3. Return an empty hash.
  4. Retry x number of times at expanding intervals until reaching a specified retry limit.
  5. Cache the configured backups and return the cache.
Each approach has tradeoffs. Ultimately, I went with (3).

The three criteria I used to evaluate the approaches were (a) responsiveness to the client, (b) client satisfaction with the response, and (c) ease of implementation. Approach (1), ignoring the issue, means that the Backup Monitor will crash again on a known issue. Approach (2), throwing the exception back to the caller, forces the caller to handle it, and does not deal with the problem closest to its point of origin. Approach (4), retrying at expanding intervals, would work, but might not return a result to the client for a long time; failing fast is more desirable in this specific case. Approach (5), caching the configured backups and returning the cached version, best ensures the client’s satisfaction (the client receives the list of configured backups even when the Chef server is down), but implementing a proper key-value store like Redis for caching was a major undertaking out of scope for this exercise. Returning an empty hash satisfied the three criteria best. A sketch of the change (same hypothetical names as above):
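    # Sketch of the fix: fail fast and return an empty hash when the
    # Chef server cannot be reached.
    def configured_backups
      nodes, = Chef::Search::Query.new.search(:node, 'roles:backup')
      nodes.each_with_object({}) { |node, backups| backups[node.name] = node['backup'] }
    rescue Net::HTTPServerException, SocketError, Errno::ECONNREFUSED => e
      Chef::Log.warn("Unable to reach the Chef server (#{e.message}); returning an empty hash")
      {}
    end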


Lastly, writing tests is a critical part of making changes to the code base. I wrote simple RSpec tests covering both the success and failure cases. A sketch of what such specs look like (hypothetical names, assuming a monitor object under test):
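    describe '#configured_backups' do
      it 'returns the backups configured on the Chef server' do
        node = double(name: 'db1', :[] => { 'schedule' => 'daily' })
        allow_any_instance_of(Chef::Search::Query)
          .to receive(:search).and_return([[node], 0, 1])
        expect(monitor.configured_backups).to have_key('db1')
      end

      it 'returns an empty hash when the Chef server is unreachable' do
        allow_any_instance_of(Chef::Search::Query)
          .to receive(:search).and_raise(SocketError)
        expect(monitor.configured_backups).to eq({})
      end
    end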


Final Thoughts

My first weeks at Wealthfront have been great. Going forward, I’m really excited to dive even deeper into the stack. I am motivated and inspired by the opportunity to take on tough technical challenges and build valuable financial products. Providing high-quality advice and investment management at low cost is a difficult and noble mission, and I’m proud to be a part of the team making it happen.

Friday, June 6, 2014

Security Notice on CCS / CVE-2014-0224

On June 5 another vulnerability in OpenSSL, the ChangeCipherSpec (CCS) Injection Vulnerability, was announced. Released as CVE-2014-0224, the advisory warns that nearly all versions of OpenSSL are vulnerable to man-in-the-middle (MITM) attacks.

After learning about the CVE-2014-0224 vulnerability, the Wealthfront team immediately deployed an updated OpenSSL library on all customer-facing servers.

Further Resources for ChangeCipherSpec Help

We recommend auditing all OpenSSL systems and upgrading any systems running vulnerable OpenSSL library versions. Here are some resources we found useful in our response to this disclosure:

As always, if you have any questions about the security of your Wealthfront account, contact us at support@wealthfront.com. We will continue to monitor this issue as the community and vendors investigate this vulnerability further.

Thursday, May 1, 2014

My First Weeks at Wealthfront

I am rounding out my first couple of weeks as a new employee at Wealthfront, and at this point I have full confidence that I chose the right opportunity for my first full-time position after graduation. Aside from the product itself, overall engineering mindset and culture were my top priorities, and Wealthfront has exceeded my expectations on both fronts.

System Architecture

While this is my first full-time position, after seven internships at different tech companies I am no stranger to the ramp-up process and to gaining familiarity with a code base. I have been delighted by just how easy it is to find where code lives, a byproduct of the overall system architecture. The system follows a service-oriented architecture (SOA), with a logical separation of services that are each hosted on different machines. These services handle logical slices such as trading logic, reporting, or serving the front end. Each service is then broken down into queries that encapsulate a single logical action; examples include sending a password reset email, creating a trade request, or processing a deposit. There is no monolithic app, no complex all-in-one functions, and essentially no hassle.

I would certainly not claim an intimate understanding of all of the systems, but in a matter of a few days I gained enough comfort to navigate the code and figure out just where changes should go. The code also remains comparatively clean, because work is broken into relatively small slices, so monolithic, complex blocks cannot develop over time. I can look at a bug or feature request and quickly determine where changes need to be made by working down the logical tree of where code lives. Along with that, the small segments mean that examples of similar code are easy to find and use as a template for the changes I am making.

Test Coverage

Identifying where code lives is great on its own, but the ability to make changes to an unfamiliar system without breaking existing logic is the immediate (and likely more important) next concern. My very first change (if you would classify something so small as such) was to add my name to a list of authors in the backend system. This was a trivial enough process of adding myself to an enum, so you can imagine my surprise when the build came back red after my commit. My immediate thought was that my environment wasn't configured properly or that I had run across a flaky test, but the build was operating correctly. The failure occurred because I had not updated the test checking the number of current authors, which now failed when it came across the new, unexpected entry. It was at this point that I gained a full understanding of everything I had been hearing about Wealthfront's commitment to test coverage.

Moving on to my first real projects, this extensive test coverage has been a blessing. In one of my early projects I had to move the entirety of our email service over to a new sender I had created for upgrades. That is to say, every single email and response was going through my new logic, and at no point did I have anything but utter confidence that everything was going to work, simply because the tests told me it was. If I broke something, it surfaced during development rather than in code review or deployment. This means I can focus on building and testing the new logic I am working on rather than checking and rechecking that I didn't miss some caveat in a function call I am using.

Deployment

Once the changes have been made, tested, reviewed, and shipped, the daunting moment of deploying your first production code arrives. This process normally involves testing code in a staging environment, determining which machines need the changes, removing those machines from the pool, performing the release, and then continually refreshing logs to make sure you didn't just blow everything up. If the theme isn't already clear, I'll add a spoiler: this turned out to be just as stress-free as the previous steps. Below is an image of the continuous deployment manager we use here at Wealthfront.


In order to deploy a service, you confirm that the automatic build triggered by the merge is successful and then click on the shield next to the service you want to release. That's it. If something does go wrong and error rates spike, the deploy will automatically roll back and your code is removed from production. As with seemingly everything here, the tool does what it is supposed to do and does it very well.

Final Thoughts

Ramp-up has been fairly painless overall, and it is exciting to be shipping so much code in such a short period of time. Clearly there is always more to learn, and I am excited to continue diving in to build a truly awesome product.

Tuesday, April 29, 2014

Marketside chats #5: Market making

This article will focus on the role of a market maker (MM) in the securities (financial instruments) markets. Let us start by talking about some other well-known markets.

Role of intermediaries

Some markets have no intermediaries:

  • open-air farmers' markets involve fruit & vegetable sellers selling directly to the buyers.

Some markets have intermediaries, but those intermediaries only act as brokers/agents who bring together already existing buyers and sellers for a fee:

  • real estate agents are not in the business of buying houses in the expectation of reselling them to another buyer for a profit (although they sometimes do).
  • used book sellers on the Amazon marketplace pay Amazon a fee for being able to post an offer price for books they own and want to sell. Amazon acts as an agent, and does not take any risks, such as pricing risk (what if an expensive college textbook stops being used in colleges, or gets a new edition?) or liquidity risk (what if no buyer can be found for a book for an entire year?).

Some markets have intermediaries who either act as brokers/agents or principals/dealers (or sometimes both, though not for the same transaction). A principal/dealer buys with the expectation to sell later, but possibly incurring risk in the meantime:

  • Car dealers often will buy a used car that is traded in. They mostly do this to profit from the (typically) newer and more expensive car they will sell to the person bringing the trade-in, but they also aim to buy the traded-in car at a low price so they can profit from marking it up for its next buyer.
  • Some real estate investors - such as the ones who send you those "we buy houses" postcards in the mail - act as principals by buying houses cheaply when they can, and selling them for a higher price. They usually buy and improve a house without first having another buyer lined up to sell to.

It is difficult to find an exact real-life analog to securities MMs, as one can argue that the intermediaries change/improve the product in each of the cases above. For instance, car dealers may add extra warranties and/or the benefit (or the illusion of a benefit) of having performed safety checks. Some real estate "flippers" fix up homes expecting their repairs to increase the house price by more than their cost.

Securities markets participants can roughly be divided in this way:
  1. Those with an opinion/bet on the price of a security: e.g. investors (retail traders, pension funds, etc.), hedgers (e.g. a farmer who wants to insure against price volatility of his produce), or speculators who want to bet on/against a security.
  2. Those who have no exposure to prices of securities: e.g. agents/brokers. Note that only exchange members (i.e. not you or me) can trade directly on the stock exchanges, so a broker has to do it on our behalf.
  3. Those who acquire exposures to prices, but typically don't want to: e.g. principals/dealers (the subject of this article).

Cost/benefit of being a market maker

A MM is a dealer who has the right (and, usually, the obligation) to make a two-sided market by posting a bid and an offer (i.e. an order to buy and an order to sell) on a security.

Typically, there are rights and obligations to being a MM. In US stock markets, the biggest benefit is the ability to sell short (i.e. sell a stock without owning it) with few restrictions. The disadvantages are usually some cost (the exchanges often make MMs pay for the privilege) and the obligation to always make a market in a stock, although sometimes the obligation is undemanding enough not to matter.

Depending on how much a MM is needed to maintain a market, exchanges may tweak the rights/obligations balance. In options markets, it is difficult to attract enough orders in each security, as there are many combinations of expiration dates and strike prices per stock. Therefore, some options markets (CBOE, PHLX) give extra benefits to MMs as an incentive, the most important being "allocations": MMs get to trade before, or with a higher quantity than, non-MM firms at the same price level (although retail customers typically get preference over both).

In practice, it is often possible to have a one-sided market (or no market) while still complying with MM requirements by entering stub quotes. For example, if GE is trading $25.10 by $25.11, a MM might publish a quote of:
  1. $25.10 by $1000, in which case the MM is essentially only buying the stock (or, alternatively, would love to sell at $1000 if anyone shows up at that price).
  2. $0.01 by $1000, in which case the MM may be complying with the letter of its requirements, but is effectively declaring no intention to trade.

What does a MM do?

Very roughly speaking, a MM makes a market on multiple securities at a time, and tries to keep its risk neutral (hedged) by making sure it does not have any big individual bet. It may hedge either
  • actively/explicitly: e.g. if at the close of a day a MM is holding $10m of US stocks, it may try to sell $10m worth of S&P 500 futures - or some smaller amount, subject to the risk they are willing to take, because hedging has transaction costs. (*1)
  • passively/implicitly: e.g. if a MM has bought a lot of GE over the course of the day, it may shade/lean its quotes to cause it to sell more than it buys. For example, if PZZA is trading $50.10 by $50.40, the MM may quote $50 by $50.40 (bidding below the market bid, but offering at the market ask), or post a larger quantity to sell than to buy. Note that this simple example assumes PZZA and GE will both move in the same direction, so it is a gross simplification of a more general model in which correlations between stocks are used.

Buying low and selling high sounds like a slam dunk, but it is not. Aside from the obvious business costs (salaries, offices, computers, compliance, etc.), the biggest enemy is adverse selection (see Marketside chats #1 for more). In short, this is the risk that a MM buys as the price is dropping, or sells as the price is rising, both of which cause losses. If this had a 50-50 chance of going either way, it wouldn't be worth mentioning, but for various reasons one is more likely to buy when the price is dropping than when it is rising.

Example: my quote is $25.10 by $25.11. I buy at $25.10 when a seller dumps a lot of stock, and then the price drops; the new market is $25.08 by $25.09. Even if I sell at $25.09, I have sold lower than I bought, despite the fact that at every point in time my bid price was lower than my offer price.

Risks that can be hedged

Being properly hedged has obvious risk benefits: it reduces the standard deviation of returns, which is desirable. Less obviously, hedging can also improve returns themselves, as it frees up more of a MM's capital for additional market making. The simplest example is being able to quote a larger size (i.e. tell the world it is willing to buy/sell a bigger amount).

A MM cannot be completely hedged, because hedging costs money, either in the form of
  1. trading costs (for active hedging). Simplest example: if a MM buys at $25.10 while the market is $25.10 by $25.11, it could hedge by selling at $25.10 right away, but then it would lose money net of fees and costs.
  2. missed opportunity to market-make more (for passive hedging). After buying a stock, the MM would stay better hedged if it stopped buying more of it and kept only sell orders out. However, it would then miss out on the benefit of buying more at the bid price.
Depending on hedging cost and hedging effectiveness, a MM may decide to hedge more or less. 
  • Example #1: if a MM is holding a lot of bank stocks, it may hedge by selling XLF, a financial-sector ETF. XLF also includes insurance companies, so it is not 100% correlated with banks, but it is cheap to trade.
  • Example #2: if a MM is holding a lot of stocks of companies that make their profits outside the US, it may decide to hedge by selling the US dollar, but beyond that there is no security with similar risks that is cheap to trade. It may instead sell a similar amount of S&P 500 futures, which are cheap to trade, although not as well correlated with the "makes profits abroad" stocks.
Overnight risk is also an issue. US stocks trade from 9:30 AM to 4 PM Eastern; it is possible to trade during a larger (but not 24-hour) window, but it is much more expensive. Typically, a MM will avoid a large single-stock exposure at any time, but even more so overnight, in case stock-specific news appears.

In practice, most stock MMs stay hedged by leaning their quotes to buy/sell more, rather than actively hedging. Overnight risk is the exception, as stock futures are cheap to trade, even shortly after the 4 PM stock market close.

Risks that cannot be hedged with securities

There are at least two categories of risk that cannot be hedged in a simple way.

Stuck quotes: a MM may be unable to change its quotes due to a technology problem. If a MM has buy and sell orders on, say, all S&P 500 stocks, its network goes down, and there are no backup mechanisms on autopilot on the exchange side (there usually are), then if the market drops it would be possible for all the buys to execute but none of the sells. By the time the MM fixes the problem, it may find that it has accidentally bought a sizable quantity of every stock it makes markets in.

Other system errors: without getting into the details, on August 1, 2012, Knight Capital lost $460 million in 45 minutes. Although this is an extreme example, there are many classes of error that can result in losses, and it takes a lot of fractions of a penny to make up for them.


(*1) A simple improvement is to use beta-adjusted numbers. For instance, if the $10m portfolio consists of stocks that on average move 1.5x as much as the stock futures a MM is hedging with, the MM will be better hedged by selling $15m of stock futures.
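In symbols, the footnote's arithmetic is

$$\text{hedge notional} = \beta \times \text{portfolio notional} = 1.5 \times \$10\text{m} = \$15\text{m}.$$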

Thursday, April 17, 2014

Automating JavaScript Code Quality Checks

At Wealthfront, we're big advocates for automation. In general, automation saves time and ensures consistency.

One of the things we want to ensure is the quality of our JavaScript code. This is particularly important for JavaScript, given the language's many quirks. While this task can't be fully automated, automating linting and style checking picks off some low-hanging fruit.

Some tools for this purpose are the Google Closure Compiler, Google Closure Linter, JSLint, JSHint, JSCS, and ESLint. We decided to use JSHint for detecting potential bugs and JSCS for automated style checking. While we actually use Closure to compile our JS, its checks weren't flexible enough for our current codebase. ESLint looks like it'll be really good, perhaps better than JSHint plus JSCS, but since it's still new, we decided against it.

What do we do with JSHint and JSCS?

- JSHint checks for potential errors, such as using an undefined variable or forgetting a break statement in a switch block.

- JSCS enforces a common style, such as camelCase variables, mandatory semicolons, or lines no longer than 120 characters (a sample configuration is sketched after this list).

- Both checks run whenever someone checks code into a side branch, and before deploying. Engineers also are able to run the checks locally.
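A sketch of a .jscsrc enforcing rules like those mentioned above (illustrative, not our exact configuration):

    {
      "requireCamelCaseOrUpperCaseIdentifiers": true,
      "requireSemicolons": true,
      "maximumLineLength": 120
    }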

What won't we do with this setup?

- An automated system won't check that code is well designed (DRY, MVC, etc., although ESLint does have DRY-related rules on its roadmap). For that, you should hire well, emphasize learning, share best practices, and hold code reviews.

- It won't check as thoroughly as the Google Closure Compiler. Since the Closure tools compile your code, they can analyze it more deeply. Closure can detect more potential bugs, like functions called with incorrect parameters, or unused or unreachable code that probably indicates a bug. It can also use JSDoc comments for additional benefits, like type checking and flagging deprecated code. (Our JSCS setup validates JSDoc too, but it isn't nearly as powerful as Closure, and we rarely use JSDoc.) One disadvantage of the Google Closure Compiler is that it is less configurable than the other tools. We actually compile our JavaScript with the Google Closure Compiler, and might use its code quality warnings in the future, but decided on JSHint plus JSCS for now.

If either JSHint or JSCS detects a problem, our build emails us with an explanation. For instance, while refactoring some inline JavaScript to place it in its own file, there was a variable defined in evaluated Ruby:

var shouldShowInstructionsParams = ["<%= @deposit_type %>", "<%= @deposit_amount %>"];

In the process of refactoring, the variable was renamed to a property of our w.inlineVars object, so that it wouldn't be a global variable.

w.inlineVars.shouldShowInstructionsParams = ["<%= @deposit_type %>", "<%= @deposit_amount %>"];

Unfortunately, the variable didn't get renamed everywhere it was used in the JavaScript files. This is the sort of thing that your unit tests would catch, assuming the unit tests cover it. Even without a test, though, the linter caught the bug:

    Checking style with JSHint...
    app/assets/javascripts/pages/transactions.js: line 14, col 7,     'shouldShowInstructionsParams' is not defined. (W117)
    1 error

Once alerted, it's easy to recognize the error and change shouldShowInstructionsParams to w.inlineVars.shouldShowInstructionsParams, fixing the bug.

One of the reasons we decided on JSHint and JSCS over Closure is flexibility. We're able to configure the tools to match the rules we want to enforce. For instance, here's our .jshintrc file:

{
  "bitwise": true,
  "browser": true,
  "camelcase": true,
  "curly": true,
  "eqeqeq": true,
  "immed": true,
  "jquery": true,
  "latedef": true,
  "loopfunc": true,
  "maxdepth": 5,
  "maxlen": 120,
  "maxparams": 6,
  "multistr": true,
  "newcap": true,
  "nonstandard": true,
  "sub": true,
  "undef": true,
  "unused": true,
  "globals": {
    "_": false,
    "d3": false,
    "ActiveXObject": false
  }
}

This describes how we've configured our JSHint rules; for instance, eqeqeq means we require triple equal signs (===) rather than double (==). You can find the full list of options here: http://www.jshint.com/docs/options/
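As a small illustration, a snippet like this (hypothetical) would trip two of the settings above:

    var total = 0;
    if (total == "0") {      // eqeqeq: '===' expected instead of '=='
      missingHelper(total);  // undef: 'missingHelper' is not defined
    }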

If you're looking to improve the quality of your company's JavaScript, we recommend adding automated quality checks to your build process. It's also useful for open source projects, where you'll otherwise get lots of pull requests that don't look like the rest of the project's code. Being able to say "PRs need to pass JSHint/JSCS" is a simple way to enforce consistency as well as find potential bugs.

Wednesday, April 9, 2014

Security Notice on Heartbleed / CVE-2014-0160

The Internet community learned on April 7 about the OpenSSL vulnerability CVE-2014-0160, known colloquially as Heartbleed. Many security professionals remember similar vulnerabilities in SSH, BIND, and Sendmail that pried open large chunks of the Internet infrastructure. Heartbleed is a similar type of vulnerability, as detailed on the Heartbleed website.

We joined financial institutions across the Internet in responding to this critical vulnerability and conducted a full security review. That review confirmed that no client-facing Wealthfront systems were vulnerable to Heartbleed, as none of our systems are running vulnerable versions of OpenSSL.

Further Resources for Heartbleed Help

Everyone deploying production services on the Internet is working to mitigate the effects of this vulnerability. We recommend auditing all OpenSSL systems and upgrading all systems using OpenSSL library versions 1.0.1 through 1.0.1f. Here is a quick roundup of resources we found useful in our response to this disclosure:

As always, if you have any questions about the security of your Wealthfront account, contact us at support@wealthfront.com. We will continue to monitor this issue as the community and vendors investigate this vulnerability further.