Today we're announcing the next version of Myna. This brings a lot of improvements. Some of the highlights:
- You can associate arbitrary JSON data with an experiment. You could use this, for example, to store text or styling information for your web page. This allows you to change an experiment from the dashboard and have the changes appear on your site without redeploying code.
- Myna is much more flexible in accepting rewards and views. This enables experiments that involve online and offline components, such as mobile applications.
- We have a completely new dashboard, which is faster and easier to use than its predecessor.
If you want to get started right away, log in to Myna and click the "v2 Beta" button on your dashboard. This will take you to the new dashboard, where you can create and edit experiments. Then take a look at our new API, part of an all-new help site.
Alternatively, read on for more details.
The New API
The changes start with our new API. The whole model of interaction with the API has changed. The old model was to ask Myna for a single suggestion, and send a single reward back to the server. There were numerous problems with this:
- Latency. It took two round trips to use Myna (one to download the client from our CDN, one to get a suggestion from our servers).
- Rigidity. Myna entirely controlled which suggestions were made, and only these suggestions could be rewarded.
- Offline use. Myna's model didn't allow offline use, essential for mobile applications.
The new API solves all these issues.
Instead of asking Myna for a suggestion, clients download experiment information that contains weights for each variant. These weights are Myna's best estimate for the proportion in which variants should be suggested, but clients are free to display any variant they wish. The client can store this information to use offline or to make batches of suggestions.
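To make the weights concrete, here is a minimal sketch of client-side variant selection. The payload shape (`uuid`, `variants`, `weight`) is an assumption for illustration, not Myna's actual wire format:

```python
import random

# Hypothetical experiment payload, as might be downloaded from the CDN.
# Field names here are illustrative, not Myna's actual schema.
experiment = {
    "uuid": "example-experiment",
    "variants": [
        {"name": "control",   "weight": 0.6},
        {"name": "treatment", "weight": 0.4},
    ],
}

def choose_variant(experiment, rng=random):
    """Pick a variant in proportion to the server-supplied weights.

    The client is free to override this choice; the weights are only
    Myna's best estimate of the proportions to display."""
    variants = experiment["variants"]
    weights = [v["weight"] for v in variants]
    return rng.choices(variants, weights=weights, k=1)[0]

variant = choose_variant(experiment)
print(variant["name"])
```

Because selection happens locally, the same payload can be cached and reused offline or to make a whole batch of suggestions at once.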
Views and rewards can be sent to Myna individually or in batches, and there are very few restrictions on what can be sent. If you want to send multiple rewards for a single view, that can be done. There are no restrictions on the delay between views and rewards, so those of you with long funnels can use Myna.
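A client might buffer views and rewards locally and send them in batches, for example like the sketch below. The event structure and the idea of passing in a `send` callable are assumptions for illustration only:

```python
import json

class EventQueue:
    """Illustrative sketch of batched reporting: buffer views and
    rewards locally, then send them to Myna in one batch.
    The payload shape is an assumption, not Myna's actual API."""

    def __init__(self, send):
        self.send = send      # callable that posts a JSON batch
        self.pending = []

    def record_view(self, experiment, variant):
        self.pending.append({"type": "view",
                             "experiment": experiment,
                             "variant": variant})

    def record_reward(self, experiment, variant, amount=1.0):
        # Multiple rewards per view are allowed, and there is no
        # limit on the delay between a view and its rewards.
        self.pending.append({"type": "reward",
                             "experiment": experiment,
                             "variant": variant,
                             "amount": amount})

    def flush(self):
        if self.pending:
            self.send(json.dumps(self.pending))
            self.pending = []
```

A mobile client could call `flush` whenever connectivity returns, which is what makes the offline use case work.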
Since you don't have to contact Myna's servers to get a suggestion, all data can be stored in a CDN. This means only a single round-trip, to a fast CDN, to use Myna.
These features combine to make Myna faster for existing uses on websites, and also to allow new uses, such as mobile applications that work offline.
Another major change is to give you more control over experiments from your dashboard. To this end you can associate arbitrary JSON data with your experiments. You can use this data to set, say, text or style information in your experiments. Then any changes you make on your dashboard, including adding new variants, will be automatically reflected in your experiments without deploying new code.
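For example, a variant's JSON data might carry the copy and styling for a page element, which the client then renders. The field names below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical JSON settings attached to a variant on the dashboard.
# Any field names work; these are invented for the example.
variant_settings = {
    "headline": "Try Myna free for 30 days",
    "headline_colour": "#2a9d2a",
}

def render_banner(settings):
    """Build a snippet of HTML from dashboard-controlled settings.

    Because the text and colour live in the experiment's JSON data,
    editing them on the dashboard changes the page with no redeploy."""
    return '<h1 style="color: {colour}">{text}</h1>'.format(
        colour=settings["headline_colour"],
        text=settings["headline"])
```

Adding a new variant on the dashboard just means adding another settings object; the rendering code stays the same.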
We have also improved the deployment process. Instead of pulling experiments into a page one-by-one, we provide a single CDN-hosted file that contains all your active experiments and the Myna for HTML client.
Finally, we've updated the algorithm Myna uses. It behaves in a more intuitive fashion without sacrificing performance.
The new API is live and is being used in production right now.
The old dashboard wasn't up to scratch. It was difficult to use and wasn't able to support the new features we're adding to the API. As a result we've created a completely new dashboard. Click the "v2 Beta" tab to access it.
The dashboard is still in development, so there are some rough edges. However it's usable enough that we're releasing it now.
Possibly the most exciting new feature is the inspector, which allows you to preview your experiments in the page. Here's a demo. To enable the inspector, just add
There is still a lot of work to do. In addition to finishing the dashboard and documentation we are working on iOS and Android clients. Beyond that we have lots of exciting features in development, which you'll hear more about as they near completion.
My wife misplaced her keys yesterday. I politely enquired why she couldn’t put her damn keys in the same place every time she came in. She opined that if I wanted to be useful I should do less work with my mouth and more with my eyes. And so we set to work finding them.
As we searched, my mind naturally turned to A/B testing. It was clear from the start that we had two different strategies for finding the keys. She exploited her knowledge of where she had put her keys in the past, and her actions immediately prior to losing the keys. I explored more or less at random, arguing that her approach was proving unsuccessful and we should abandon our prior assumptions. Either approach on its own is inefficient, but together we were able to cover a large portion of the house in a relatively short period of time.
The exploration-exploitation dilemma lies at the heart of Myna. Myna constantly balances exploiting the variants that have worked well in the past against exploring other variants to see if they are in fact better. Myna can make an optimal tradeoff due to the power of the algorithms, and the relatively simple structure of the A/B testing problem.
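Myna's own algorithm is proprietary, but the tradeoff it manages can be illustrated with the simplest bandit strategy, epsilon-greedy: usually play the best-performing variant, occasionally explore another at random. This is only a stand-in, not Myna's actual method:

```python
import random

class EpsilonGreedy:
    """A minimal bandit illustrating exploration vs exploitation.
    (A stand-in only; Myna's algorithm is more sophisticated.)"""

    def __init__(self, arms, epsilon=0.1, rng=random):
        self.epsilon = epsilon
        self.rng = rng
        self.counts = {arm: 0 for arm in arms}
        self.totals = {arm: 0.0 for arm in arms}

    def choose(self):
        if self.rng.random() < self.epsilon:
            # Explore: pick an arm uniformly at random.
            return self.rng.choice(list(self.counts))
        # Exploit: pick the arm with the best observed mean reward.
        # Unplayed arms get +inf so each is tried at least once.
        def mean(arm):
            n = self.counts[arm]
            return self.totals[arm] / n if n else float("inf")
        return max(self.counts, key=mean)

    def reward(self, arm, amount):
        self.counts[arm] += 1
        self.totals[arm] += amount
```

Run against two arms with different conversion rates, the better arm ends up played far more often, while the occasional exploration keeps checking that it is still the better one.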
Designing A/B tests involves a similar balancing act. We can exploit our knowledge of prior tests and best practices (such as these) to guide us when creating our own experiments. However, we must be cautious not to rely on those common tests too heavily. What has worked before, or for others' customers, might not work now or for ours. Similarly, exploring any and every idea that pops into our minds may be very interesting, and potentially bring dramatic results, but this has to be balanced against the risk of confusion or wasted time.
As you can see, once you start looking for it, you’ll find the exploration-exploitation dilemma everywhere.
No prizes for guessing who found the keys. (PS: it wasn’t me.)
"For Mosaic type xmosaic." So I typed xmosaic and discovered the web.
In 1994 Yahoo had only just been created; it would be a year before Amazon was online, and the research project that led to Google wouldn't start for another two years. Yet despite the blink tags and "Under Construction" GIFs one thing was clear: the web was, and would be, something amazing. I was most struck by its essential equality. In those days anyone could create a web page and stand on equal footing with the rest of the world.
Fast forward 16 years and things have changed. The web is now big industry and ads, SEO, and other techniques are all used by businesses to give themselves an advantage. The Internet is dominated by large corporations, and it isn’t so easy for the little guy to be heard.
I happened to pick a field, machine learning, that has become one of the key differences between the big and small players. The big Internet properties have a substantial advantage by their use of intelligent algorithms to optimise their sites, product recommendations, and so on. It’s also clear that the small players can’t easily replicate this. Simply put, they don’t have the expertise to develop these systems in-house, and Google have already hired all the available PhD graduates.
This is where Myna comes in. We want to rebalance the Internet by democratizing access to the technology the big companies are using. Of course paying the bills is important, but fundamentally if we can push forward the industry we’ll have achieved something important.
If you’re not Google, Amazon, Yahoo!, or Microsoft (or even if you are) we hope you’ll give Myna a try. We’re just starting out on what we hope will be a long and eventful journey, and we look forward to growing alongside you.
Myna’s new API is out! A lot of discussion went into the new API, so it took a bit longer than we planned. We think it’s worth the wait: the new API is far richer and more usable than our original design. If you want to integrate Myna into your existing marketing systems, you’ll definitely want to check it out. Also take a look at the clients under development on our Github page, which will make integration easier.
We’re currently working on a new API for Myna. The new API exposes much more functionality, allowing, for example, experiments and variants to be created and removed. While it’s in development we’re soliciting feedback from the community. If you’re interested, read the documentation and let us know about any changes you think would improve it.
37Signals recently posted an interesting article on their use of A/B testing. Naturally I think they’d do a lot better if they used Myna. They include enough data in their post that we can run some simulations to quantify how much better Myna would do for them. Prepare to be surprised!
The first thing I wanted to look at was the impressive 102.5% improvement they got from the “Person Page”. In another post they said their sample size was about 42’000. With such a large improvement A/B testing is going to find the correct result at the end of the test. But how many extra signups would they have got if they had sent those 42’000 users via Myna? It turns out Myna has a whopping 33% improvement over A/B testing. The graph below shows the improvement Myna makes over A/B testing for five thousand runs of the same experiment. You can see the average improvement is 33%, and it is never lower than 26%.
That’s the easy case, the rare change that leads to an enormous improvement. What about the 4.78% improvement Michael gives over Jocelyn? This is the bread-and-butter case for A/B testing, the kind of small improvement that adds up over time. Here things get interesting. Myna still improves over A/B testing, though the difference isn’t so dramatic. More interesting is that A/B testing gets it wrong over 80% of the time! Let me repeat that: given 42’000 samples and a 4.78% improvement over baseline, A/B testing makes the wrong choice 80.96% of the time. Myna, being an adaptable system, never gets stuck with a fixed decision.
What happens if we raise the sample size to 240’000 samples? Now A/B testing makes the wrong choice about 25% of the time, which is still quite poor, and Myna still averages a small improvement over A/B testing. There are two interesting questions we might ask here:
- How many samples do we need before A/B testing gets the right answer almost every time?
- What happens to the performance of Myna vs A/B testing when A/B testing makes the wrong choice?
To try to answer the first question I ran the same experiment but with 360’000 samples. I didn’t want to wait forever so I only repeated this experiment 500 times. Here A/B testing makes the right decision 90% of the time, which is probably acceptable for most people. Still, this is a lot more traffic than the 42’000 samples we started with.
For the second question I went back to the original setup and asked A/B testing to make a decision given 42’000 samples. I then ran A/B testing and Myna for an additional 60’000, 120’000, and 240’000 samples. I repeated this experiment 500 times. The average improvement of Myna over A/B testing was 1%, 2%, and 5% respectively. These results show how Myna can continuously optimise. We never need to make a hard decision, so we’ll never get stuck with the wrong one. As we’ve seen, this flexibility doesn’t cost us anything – Myna continues to outperform A/B testing even in the cases that are easy for A/B testing.
Here are the main points:
- Myna makes use of data as it arrives, so you can expect Myna to out-perform A/B testing when one option is clearly better.
- If you’re doing A/B tests with relatively small sample sizes, you’re missing out on many small improvements because you simply don’t have enough data for statistically significant results.
- Myna won’t get stuck with the wrong decision when the data isn’t clear. Unlike A/B testing you don’t have to set the sample size in advance. Myna will keep on optimising indefinitely, catching all those small improvements that eventually add up but take a lot of data to determine.
If you want to try this at home, here are some details on my experimental setup. I assumed the base sign-up rate is 5%, which is typical of e-commerce applications. Except where indicated each experiment had 5’000 runs. I used the G-test with a p-value of 0.05 for A/B testing. I can’t tell you the secret sauce that goes into Myna’s algorithms, but in later posts I hope to go over some basic bandit algorithms, which are the core technology behind Myna.
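For readers who want to reproduce the A/B-testing side, here is a sketch of the setup described above: a 2x2 G-test at p = 0.05 (critical value 3.841 for one degree of freedom), applied to simulated conversions. This is my reconstruction of the method, not the actual experiment code:

```python
import math
import random

def g_test(successes_a, n_a, successes_b, n_b):
    """2x2 G-test of independence. Returns the G statistic; compare
    against 3.841 (chi-square, 1 d.o.f., p = 0.05)."""
    observed = [successes_a, n_a - successes_a,
                successes_b, n_b - successes_b]
    pooled = (successes_a + successes_b) / (n_a + n_b)
    expected = [n_a * pooled, n_a * (1 - pooled),
                n_b * pooled, n_b * (1 - pooled)]
    # 0 * ln(0) is taken as 0, the usual convention.
    return 2 * sum(o * math.log(o / e)
                   for o, e in zip(observed, expected) if o > 0)

def simulate(rate_a, rate_b, n_per_arm, rng=random):
    """One simulated A/B test with n_per_arm samples per variant.
    Returns True if the test is significant AND picks the variant
    that is truly better."""
    sa = sum(rng.random() < rate_a for _ in range(n_per_arm))
    sb = sum(rng.random() < rate_b for _ in range(n_per_arm))
    significant = g_test(sa, n_per_arm, sb, n_per_arm) > 3.841
    return significant and (sb > sa) == (rate_b > rate_a)
```

With a 5% base rate, a 102.5% improvement corresponds to roughly a 10.1% rate on the better variant; repeating `simulate` many times gives the success frequencies discussed above.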
Twelve thousand hits, over thirty emails, seven comments on the post, and over a dozen new beta testers. That’s what getting a blog post featured on Hacker News brought us. We’ve been slowly developing Myna over the last few months, but this gave us the impetus to completely revamp the website. As you can see it’s still quite minimal, but it is certainly an improvement over the old site. Here are a few technical details that might be of interest if you’re trying to quickly build out a site:
The basic design of the site is Minima from Theme Forest. Well worth spending the $9 to get the general layout and some graphics.
We’ve heavily modified the Minima theme. It has a bunch of things we don’t need and didn’t support pages with lots of text. We used Less to get some abstraction over the CSS, which makes large changes a lot easier. Use it or use Sass. These tools are basically equivalent, so just pick one and move on.
We were lucky enough to find a public domain picture of a myna bird on Wikipedia. If we hadn’t, we’d simply have bought one from iStockphoto. We retouched the image a bit in Pixelmator, which does the bits of Photoshop we want at a price we can accept.
Now it’s time to get our new users live. Thanks, Hacker News!