Choosing Goals for A/B Testing

One of the most important decisions when designing an A/B test is choosing the goal of the test. After all, if you don’t have the right goal, the results of the test won’t be of any use. This is particularly important when using Myna, as Myna dynamically changes the proportion in which variants are displayed to maximise the goal.

So how should we choose the goal? Let’s look at the theory, which tells us how to choose the goal in a perfect world, and then see how that theory can inform practice in a decidedly imperfect world.

Customer Lifetime Value

For most businesses the goal is to increase customer lifetime value (CLV). What is CLV? It’s simply the sum of all the money we’ll receive in the future from the customer. (This is sometimes known as predictive customer lifetime value as we’re interested in the revenue we’ll receive in future, not any revenue we might have received in the past.)

If you can accurately predict CLV it is a natural goal to use for A/B tests. The performance of each variant under test can be measured by how much they increased CLV on average. Here’s a simple example. Say you’re testing calls-to-action on your landing page. The lifetime values of interest here are the CLV of a user arriving at your landing page who hasn’t signed up, and the CLV of a user who has just signed up. If you have good statistics on your funnel you can work these numbers out. Say an engaged user has a CLV of $50, 50% of sign-ups go on to become engaged, and 10% of visitors sign up. Then the lifetime values are:

  • for sign-ups $50 * 0.5 = $25; and
  • for visitors $25 * 0.1 = $2.50.
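
To make that arithmetic concrete, here’s a minimal sketch of the calculation. The figures are the ones from the example above; the code itself is purely illustrative and not part of Myna.

```typescript
// Figures from the example above.
const engagedClv = 50;           // CLV of an engaged user, in dollars
const pEngagedGivenSignup = 0.5; // 50% of sign-ups become engaged
const pSignupGivenVisit = 0.1;   // 10% of visitors sign up

// The expected CLV at each stage is the downstream CLV scaled by
// the probability of reaching that stage.
const signupClv = engagedClv * pEngagedGivenSignup; // $25
const visitorClv = signupClv * pSignupGivenVisit;   // $2.50

console.log(`sign-up CLV: $${signupClv}, visitor CLV: $${visitorClv}`);
```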

The great thing with CLV is you don’t have to worry about any other measures such as click-through, time on site, or what have you – that’s all accounted for in lifetime value.

Theory Meets Practice

Accurately predicting CLV is the number one problem with using it in practice. A lot of people just don’t have the infrastructure to do these calculations. For those that do there are other issues that make predicting CLV difficult. You might have a complicated business that necessitates customer segmentation to produce meaningful lifetime values. You might have very long-term customers making prediction hard. I don’t need to go on; I’m sure you can think of your own reasons.

This doesn’t mean that CLV is useless: it gives us a framework for evaluating other goals such as click-through and sign-up. For most people, using a simple-to-measure goal such as click-through is a reasonable decision. These goals are usually highly correlated with CLV, and it is better to test with a slightly imperfect goal than not to test at all out of concern about accurately measuring lifetime value. I do recommend checking from time to time that these simpler goals are correlated with CLV, but that shouldn’t be needed for every test.

CLV is also very useful when the user can choose between many actions. Returning to our landing page example, imagine the visitor could sign up for a newsletter as well as signing up to use our product. Presumably visitors who just sign up for the newsletter have a lower CLV than those who sign up for the product, but a higher CLV than those who take no action at all. Even if we can’t predict CLV precisely, using the framework at least forces us to directly face the problem of quantifying the value of different actions.

This approach pays off particularly well for companies with very low conversion rates or a long sales cycle. Here A/B testing can be a challenge, but we can use the model of CLV to create useful intermediate goals to guide us. If it takes six months to convert a visitor into a paying customer, look for intermediate goals and estimate their CLV. This could be downloading a white paper, signing up for a newsletter, or even something like a repeat visit. Again, it isn’t essential to predict CLV accurately, just to assign some value that is in the right ballpark.

Applying CLV to Myna

So far everything I’ve said applies to general A/B testing. Now I want to talk about some details specific to Myna. When using Myna you need to specify a reward. For simple cases like a click-through or sign-up, the reward is simply 1 if the goal is achieved and 0 otherwise. For more complicated cases Myna allows very flexible rewards that can handle most situations. Let’s quickly review how Myna’s rewards work, and then how to use them in more complicated scenarios.

Rewards occur after a variant has been viewed. The idea is to indicate to Myna the quality of any action coming from viewing a variant. There are some simple rules for rewards:

  • any reward must be a number between 0 and 1. The higher the reward, the better it is;
  • a default reward of 0 is assumed if you don’t specify anything else; and
  • you can send multiple rewards for a single view of a variant, but the total of all rewards for a view must be no more than 1.

Now that we know about CLV, the correct way to set rewards is obvious: rewards should be proportional to CLV. How do we convert CLV to a number between 0 and 1? We recommend using the logistic function to guarantee the output is always in the correct range. However, if you don’t know your CLV, just choose some numbers that have the correct ranking and roughly the correct magnitude. So for the newsletter / sign-up example we might go with 0.3 and 0.7 respectively. This way, if someone performs both actions they get a reward of 1.0.
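
As a sketch, here’s one way to implement that logistic conversion. The midpoint and scale parameters are assumptions you would tune to your own CLV figures; Myna doesn’t prescribe them.

```typescript
// Map a CLV (in dollars) to a reward in (0, 1) with the logistic function.
// midpoint: a "typical" CLV that should map to a reward of 0.5 (assumption).
// scale: how quickly rewards saturate towards 0 or 1 (assumption to tune).
function clvToReward(clv: number, midpoint = 25, scale = 10): number {
  return 1 / (1 + Math.exp(-(clv - midpoint) / scale));
}

console.log(clvToReward(2.5)); // ~0.10, a visitor who took no further action
console.log(clvToReward(25));  // 0.5, a sign-up
console.log(clvToReward(50));  // ~0.92, an engaged user
```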

That’s really all there is to CLV. It’s a simple concept, but it has wide ramifications in testing.

New Features Across the Board

Today we're announcing the next version of Myna. This brings a lot of improvements, some of the highlights being:

  • you can associate arbitrary JSON data with an experiment. You could use this, for example, to store text or styling information for your web page. This allows you to change an experiment from the dashboard and have the changes appear on your site without redeploying code;

  • Myna is much more flexible in accepting rewards and views. This enables experiments that involve online and offline components, such as mobile applications;

  • we have a completely new dashboard, which is faster and easier to use than its predecessor.

If you want to get started right away, log in to Myna and click the "v2 Beta" button on your dashboard. This will take you to the new dashboard, where you can create and edit experiments. Then take a look at our new API, part of an all-new help site.

Alternatively, read on for more details.

The New API

The changes start with our new API. The whole model of interaction with the API has changed. The old model was to ask Myna for a single suggestion, and send a single reward back to the server. There were numerous problems with this:

  • Latency. It took two round trips to use Myna (one to download the client from our CDN, one to get a suggestion from our servers).
  • Rigidity. Myna entirely controlled which suggestions were made, and only these suggestions could be rewarded.
  • Offline use. Myna's model didn't allow offline use, essential for mobile applications.

The new API solves all these issues.

Instead of asking Myna for a suggestion, clients download experiment information that contains weights for each variant. These weights are Myna's best estimate for the proportion in which variants should be suggested, but clients are free to display any variant they wish. The client can store this information to use offline or to make batches of suggestions.
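
For illustration, here’s a sketch of that client-side selection step. The data shape is invented for the example and isn’t Myna’s actual wire format.

```typescript
interface Variant {
  name: string;
  weight: number; // Myna's estimate of the proportion to suggest
}

// Pick a variant at random, in proportion to the downloaded weights.
function chooseVariant(variants: Variant[]): Variant {
  const total = variants.reduce((sum, v) => sum + v.weight, 0);
  let r = Math.random() * total;
  for (const v of variants) {
    r -= v.weight;
    if (r <= 0) return v;
  }
  return variants[variants.length - 1]; // guard against rounding error
}

const experiment: Variant[] = [
  { name: "control", weight: 0.5 },
  { name: "new-cta", weight: 0.3 },
  { name: "big-image", weight: 0.2 },
];
console.log(chooseVariant(experiment).name);
```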

Views and rewards can be sent to Myna individually or in batches, and there are very few restrictions on what can be sent. If you want to send multiple rewards for a single view, that can be done. There are no restrictions on the delay between views and rewards, so those of you with long funnels can use Myna.
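
A client might implement this along the following lines, buffering events locally and flushing them in one request. The endpoint and event shape here are hypothetical, purely to show the batching idea.

```typescript
interface TrackedEvent {
  type: "view" | "reward";
  variant: string;
  amount?: number; // reward amount, where applicable
  timestamp: number;
}

const buffer: TrackedEvent[] = [];

// Record an event locally; nothing is sent yet, so this works offline.
function track(event: TrackedEvent): void {
  buffer.push(event);
}

// Send all buffered events in a single batch (hypothetical endpoint).
async function flush(url: string): Promise<void> {
  if (buffer.length === 0) return;
  await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buffer.splice(0)),
  });
}
```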

Since you don't have to contact Myna's servers to get a suggestion, all data can be stored in a CDN. This means only a single round-trip, to a fast CDN, to use Myna.

These features combine to make Myna faster for existing uses on websites, and also to allow new uses, such as mobile applications that work offline.

Another major change is to give you more control over experiments from your dashboard. To this end, you can associate arbitrary JSON data with your experiments. You can use this data to set, say, text or style information in your experiments. Then any changes you make on your dashboard, including adding new variants, will be automatically reflected in your experiments without deploying new code.
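
As a sketch, suppose the JSON data attached to an experiment carries the call-to-action text for each variant. This data shape is invented for illustration; you’re free to store whatever structure suits your site.

```typescript
// Hypothetical JSON stored with the experiment on the dashboard:
// { "control": { "cta": "Sign up" }, "new-cta": { "cta": "Start your free trial" } }
type ExperimentData = Record<string, { cta: string }>;

// Apply the chosen variant's text to the page, so dashboard edits
// show up on the site without a code deploy.
function applyVariant(data: ExperimentData, variantName: string): void {
  const button = document.querySelector<HTMLElement>("#cta-button");
  if (button) {
    button.textContent = data[variantName].cta;
  }
}
```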

We have also improved the deployment process. Instead of pulling experiments into a page one-by-one, we provide a single CDN-hosted file that contains all your active experiments and the Myna for HTML client.

Finally, we've updated the algorithm Myna uses. It behaves in a more intuitive fashion without sacrificing performance.

The new API is live and is being used in production right now.

New Dashboard

The old dashboard wasn't up to scratch. It was difficult to use and wasn't able to support the new features we're adding to the API. As a result we've created a completely new dashboard. Click the "v2 Beta" tab to access it.

The dashboard is still in development, so there are some rough edges. However it's usable enough that we're releasing it now.

New Clients

Along with the new API we are also developing new clients. Most of you integrate with Myna in the browser, and here we have new versions of Myna for Javascript and Myna for HTML.

Possibly the most exciting new feature is the inspector, which allows you to preview your experiments in the page. Here's a demo. To enable the inspector, just add #preview to the end of the URL of any experiment that uses the new version of Myna for HTML or Myna for Javascript.

Documentation is still in progress for these clients. Look here for Myna for HTML, and here for Myna for Javascript.

What's Next?

There is still a lot of work to do. In addition to finishing the dashboard and documentation we are working on iOS and Android clients. Beyond that we have lots of exciting features in development, which you'll hear more about as they near completion.

Myna for Mobile


The future of the Internet is mobile-shaped. Consider this: the mobile Internet is growing eight times faster than the desktop Internet, and there are already more smartphone users than PC users.

Mobile strategy is important for any Internet business, but it’s also a challenging market to operate in. At Myna we’re trying to make it a bit easier. Over the next few weeks we’ll be extending our real time A/B testing technology to mobile, giving Internet businesses unequaled ability to understand and optimise their offerings across the mobile and desktop Internet.

The Problem

Compared to the web, mobile business is hard. Consider the following:

  • Ranking is everything on both the iOS and Android app stores. Downloads quickly drop off as ranking falls.
  • New apps have a short window when they are highly ranked, and if they can engage with users during this period they can maintain their ranking.
  • Users hardly ever update their apps. Furthermore, if you’re targeting iOS, the lengthy App Store review process ensures that fixes arrive too late to have a significant effect.
  • Limited data: only the total number of downloads and app store feedback is available.

This means that most businesses targeting mobile get only one chance to get it right — when they first launch — and they have to build their product before they receive any real feedback. Anyone who has run a business knows that nothing destroys assumptions faster than real users. In the last decade, online businesses have learned how to deliver steady improvements through controlled testing and rapid iteration. Can mobile businesses do the same?

The Solution

We’re not the first to recognise the need for fast iteration in mobile apps. Trigger.io, Phonegap, and others provide HTML5 frameworks for building apps that are essentially websites. This allows updates to be pushed straight to mobile devices, a process known as hot deployment. Developers gain a fast feedback cycle, and they can use familiar analytics and testing tools, including Myna, to optimise their application. However, this approach cannot match the look-and-feel and performance of a true native application, which rules it out in many cases.

We’ve heard from a number of existing customers about their frustrations testing on mobile, and have decided to address them directly. We’re developing native Myna clients for iOS and Android, and also extending our API to allow you to push changes to your tests straight from Myna’s dashboard. This gives the best of both worlds: native application performance and hot deployment, bypassing the issues with app store updates.

We’re very excited about this new addition to Myna, which will benefit our existing web users as well. Myna will become the first testing platform that allows cross-channel testing, backed by super-fast bandit algorithms.

We will be launching Myna for mobile in closed beta. If you would like to be one of our beta testers you can sign up by emailing hello@mynaweb.com.

Growth Hacking and the Bandit Problem

“Growth Hacking and the Bandit Problem” is a recent talk by the awesome Noel introducing multi-armed bandits as a superior alternative to A/B testing. In case you missed it, we decided to write it up as a blog post.

Our Hero

We begin by introducing our main character and hero of the story, the growth hacker. He is driven by one thing and one thing alone: pushing growth ever upwards and ever rightwards. Swoon.

To do this he follows 3 simple steps: Build, Measure, and Learn, a process handed down by the prophet of growth, Eric Ries. These 3 steps give our hero what he needs: a structured process to drive growth. Starting at the top, he gets something built, let’s say a new sign up page. He then takes some time to collect data and measure its effectiveness. Once this is over he sits down with his data and he learns, making a decision based on those results and informing the next iteration of the cycle. Round he goes again!

Our growth hacker will use his wide range of skills at each stage of the cycle but his main objective is always achieving rapid growth. The speed at which he can get round this cycle will determine how fast and how far his metrics, and ultimately the business, can grow.

Faster is Better

Driven by a need for speed, our growth hacker takes a look at each step in his engine of growth to see where he can go faster. He starts off with build. Hmm there doesn’t seem to be much he can do here. Our growth hacker’s already pretty agile on the dev front.

Learning already seems to happen pretty fast, once he’s got all his data together. But measure? Now measure seems like a place where he might be able to speed up. At the moment he’s using A/B testing. Collecting all the data he needs to make a sound statistical decision takes a long time.

Eureka!

What if measuring and learning could happen together? What if we could turn our 3-step process into a 2-step process and speed it up dramatically? We could change our metrics chart to look like the green line instead of the orange, allowing our growth hacker to iterate and optimise as fast as he can! Well, surprise surprise, you guessed it: our hero the growth hacker has just discovered the multi-armed bandit, a way to drive growth faster than ever before!


The Multi-Armed Bandit (Growth Hackers’ Secret Sauce)

“Woah there,” I hear you say. “Let’s just hold on a minute here and have a bit of background into this multi-armed bandit. Where is it from and what’s it all about?” Well, like all good secret powers, the multi-armed bandit started off as a problem: the bandit problem.

Imagine walking into a casino. You head straight for a room full of slot machines or, as they’re called in the US, one-armed bandits. You’re a clever egg, so no doubt you’re thinking that some of these machines are going to pay out more than others. You want to make sure you maximise your reward by finding and playing the bandit that pays out most. This is the bandit problem.

After a long hard think, and a lot of maths, you come up with a formula that helps you to find the machine that pays out most as soon as possible. Hurray! This means you don’t waste your money trying other machines that pay less often. The formula is called a bandit algorithm, and with this in hand our growth hacker receives the mighty powers of the multi-armed bandit!

Now Back to the Story

With his trusty multi-armed bandit at his side, our growth hacker can now set to the task in hand. Today he’s increasing conversions on a signup page. With 3 variants of the web page to choose from, the MAB shows a different page to each visitor, and its reward comes when a visitor clicks on the orange button and converts. The scores are totted up and the process is merrily repeated each time a new visitor comes to the site.

Two Key Ingredients to the Secret Sauce

So far so good, but doesn’t this sound familiar? Up to this point our MAB has been purely exploring: trying variants at random and totting up their scores, which is the main feature of A/B testing. But remember, the goal of a MAB is to maximise the total reward, so it’s going to bring another element into the mix: exploitation, which means showing the variants that have worked best in the past. This delicate balancing act between exploration and exploitation plays out for the duration of the test as the multi-armed bandit happily goes about measuring and learning, at all times working to maximise total reward.

What are My Options?

We’ve just sketched an algorithm known as E-Greedy. When it comes to MABs there are lots to choose from, such as E-Greedy, Thompson Sampling, UCB-1, or Myna. Not all will perform in the same way or deliver the same results, as you can see in the chart below. (Ooh look, Myna’s the most successful. Who would have guessed?!)
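
For the curious, here’s a minimal sketch of E-Greedy itself: with probability epsilon it explores a variant at random, otherwise it exploits the best variant seen so far. The parameter values are illustrative.

```typescript
class EpsilonGreedy {
  private plays: number[];
  private totals: number[];

  constructor(private arms: number, private epsilon = 0.1) {
    this.plays = new Array(arms).fill(0);
    this.totals = new Array(arms).fill(0);
  }

  // Choose an arm: explore at random with probability epsilon,
  // otherwise pick the arm with the best average reward so far.
  choose(): number {
    if (Math.random() < this.epsilon) {
      return Math.floor(Math.random() * this.arms);
    }
    const means = this.totals.map((t, i) =>
      this.plays[i] === 0 ? Infinity : t / this.plays[i]); // try unplayed arms first
    return means.indexOf(Math.max(...means));
  }

  // Record the reward observed for an arm.
  reward(arm: number, value: number): void {
    this.plays[arm] += 1;
    this.totals[arm] += value;
  }
}

// Usage: three sign-up page variants.
const bandit = new EpsilonGreedy(3);
const arm = bandit.choose();
bandit.reward(arm, 1); // the visitor clicked the orange button
```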

Real Life

Now I know what you’re thinking: fancy graphs based on simulated data are all well and good but I want to hear about some real life results. Look no further than one of our customers: Vizify, a startup working to create beautiful online portfolios. In order to improve their user engagement they decided to deploy Myna to optimise their email subject lines. Because Myna is so efficient with data, in just a few days Vizify had received a 500% increase in clickthroughs (pretty impressive for a startup with small amounts of traffic).

With Great Power comes Great Responsibility

When using MABs there are a few things to bear in mind:

Workflow
Your workflow will change dramatically. It’s going to become simpler, as Myna does all the work for you. It will be faster: because Myna is so efficient with data, you’ll get results at lightning speed. It’s also far more flexible. You can wave goodbye to setting parameters in advance (experiment length and p-value), and you can add and remove variants at any time, testing almost anything you set your mind to!
Defining rewards
The ideal reward measure for any A/B test is most likely customer lifetime value, but you probably can’t measure this very quickly. You need a fairly fast feedback cycle so the algorithm can adapt in a reasonable time period. Using simple measures like conversions is fine, but with any test you should check that they correlate with your true performance metrics.
Stable Preferences
The algorithms we’ve discussed only work when users have stable preferences. We don’t mean that all users act in the same way, but rather that their behaviour is similar in aggregate and stable over time. Broadly speaking, we assume what works today will continue to work tomorrow. For UI elements this is generally the case, but it is not, for example, true for news items, where the value of a story is strongly time-dependent.

And that’s it. Our growth hacker’s secret sauce, the multi-armed bandit, has been transformed into an almighty T-Rex. Jump on its back, ride off into the sunset, and do what growth hackers do best: grow, fast! RAWWWRR!

First steps in A/B testing part 1 of 2: Process

At Myna we’re all about considered, methodical approaches to website optimisation. But for newbies, A/B testing can seem like a lot of hard work: fussy micro-management of page elements and aesthetics in pursuit of minor gains.

How do you know where to begin? How do you find the tests that make a big difference?

To get the most out of A/B testing, you need a plan. Don’t just sit down in front of your website and start testing the colour of a button – start by establishing what you want to improve. For that you need statistics, so sit yourself down, fire up your analytics program of choice, and let’s get started.

This is the first in a two-part series of posts outlining how to build a testing plan. In this post we’ll talk about testing the process you’re guiding your users through. In the next post we’ll talk about testing the positioning of your product in the market.

Process

When testing process, you are assuming that your product or offer is attractive to your users, but something is getting in the way of making a sale.

Look at the paths you want users to take through your web site, find out where you’re losing them, and test around this.

Where are you losing users? Do they hit your landing page and then bounce straight off? Do people fail to complete the sign up process? Or do people get distracted and wander off during checkout? Possible culprits are:

  • your landing page lacks a strong call to action
  • your sign-up process is too long
  • your checkout process is too complicated
  • your product pages don’t offer a clear path to purchase

Your analytics will tell you what you need to know – the key terminology is a funnel or sales process. This is a sequence of steps that you define that lead users to your conversion goal. You can set up funnels using conversion goals in Google Analytics as well as other analytics packages to track the flow of users through the desired steps.

Here is an example sales funnel based on our sign-up process at Myna. Note that the conversion rate figures are fabricated for illustrative purposes.

When people sign up to Myna, we push them through four steps: they choose a plan, enter account details, enter billing information, and arrive at their shiny new Myna dashboard. At each step some percentage of users proceed onwards, while other users leave and go elsewhere. Our goal is to maximise the number of people who get all the way through to the end.

Example sales funnel for Myna
(conversion rates for illustrative purposes only)
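
To make the funnel arithmetic concrete, here’s a sketch with made-up visitor counts (fabricated, like the figures in the chart):

```typescript
// Visitors remaining at each step of a hypothetical funnel.
const steps = [
  { name: "landing page", visitors: 10000 },
  { name: "choose plan", visitors: 1200 },
  { name: "account details", visitors: 900 },
  { name: "billing info", visitors: 800 },
  { name: "dashboard", visitors: 320 },
];

// Step-to-step conversion rates show exactly where users are lost.
for (let i = 1; i < steps.length; i++) {
  const rate = steps[i].visitors / steps[i - 1].visitors;
  console.log(`${steps[i - 1].name} -> ${steps[i].name}: ${(rate * 100).toFixed(0)}%`);
}
```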

The key information provided by the funnel is where we are losing users. In the example there are two weak steps: the landing page (step 1) and the billing page (step 4). We immediately know where to start optimising:

  • The landing page is likely to have a low conversion rate because not all users arrive at our site wanting to sign up. Some users have already signed up and want to log in; other users may want to read our blog or contact us. Nevertheless, we can run a few tests here to try to increase this conversion rate. It’s not hard to come up with several ideas around messaging, design and layout.
  • The final step of the pipeline, billing information, is a great candidate for testing. By the time users get to this step we know they definitely want to sign up to a plan. Why aren’t 60% of them doing so? This is a great candidate for testing ways to minimise friction on the billing form: collect fewer fields, optimise the layout, and provide the user with immediate feedback if they get something wrong.

The conversion rates on a funnel provide useful information that goes beyond testing. For example, in Myna we sign people up to our free plan in step 3, before they enter billing information. This way, we allow users to use Myna even if they choose not to give us their credit card details. We also allow users to opt in to our mailing list in step 3, providing us with a way to contact them in the future.


In conclusion, analysing funnels and traffic around your site provides the key to testing process. Once you understand where you can make the biggest gains, it’s much easier to get into the specifics of tweaking layouts, images, and buttons. At every level you should make hypotheses about why people are leaving your funnel and address them by testing new ideas.

It’s important to note that sales funnels are often more complicated than the one we’ve described above. Our sales process at Myna involves many more steps on and off our web site, encouraging users to come and try our product, providing support setting up tests, and helping them get their first set of results. Your web sign-up or checkout process is a great place to start, however. As you get used to testing, you can expand outwards to look at the big picture.

Finally, testing process is only useful if you’re managing to attract users to your site. In order to do that, you’re going to need a marketing strategy. A/B testing can help greatly with marketing as well, as we’ll see in the second post in this series on positioning.

New addition to the team

The eagle-eyed among you may have noticed that we’ve got a new addition to the team. Please welcome our new mascot, Monique:

As the old site design clashed with her beautiful colouring, we threw it all away and gave the site a bit of a polish. Please excuse any empty paint cans around the place.

Monique is a Balinese myna bird, a critically endangered species. Though exact population numbers are unknown, efforts are being made to protect and strengthen the wild population, which has been severely diminished by poachers.

The Bali Myna can also be found on Indonesian currency, and you can read more on birdlife.org.

Five things you could be A/B testing in addition to the colour of your buttons

One of the most popular questions from new A/B testers is “What should I test?”. There are many blog posts on the Internet that describe testing strategies, effective tests for landing pages, the psychology of testing, and so on. Rather than describe the usual candidates (hero image, feature list, call to action – you can test them all), I thought I’d provide a fresh perspective on some creative tests that you may not have considered.

1. Signup process

How simple should your signup process be? If you remove distractions and provide a clean, simple interface that focuses visitors’ attention, will you see an increase in signups? Take away too much, or force visitors to sign up before they know what you do or why you need their information, and you may scare them away.

2. Information capture

You can test anything along your pipeline from customer acquisition to post-purchase follow-up. Most people like to start with the front-end of the pipeline: SEO to make their web site more visible, and web site optimisation to make their landing pages more effective.

Collecting information from potential customers is a tricky business. Collect too little and you’re missing out on potentially useful insights. Ask for too much and you risk putting people off filling in your form. In addition to testing the visual aspects of form design (label positioning, immediate feedback on potential errors, and so on), one of the most useful things you can do is test the amount and format of data you’re trying to capture. After all, all the data in the world is no use to you if it never leaves your customers’ heads!

3. Email content

Communication with your customers is critically important, and it’s not just your website that matters. Emails form a vital part of many businesses’ communication strategy. Take your business’s voice as an example. Do your users prefer a more personable, friendly style? Or do they want reassurance that they’re dealing with professionals? Test the style, language, and layout of your email content, and measure the response rate or traffic generated over a number of campaigns.

4. Features and pricing

Features and pricing are the two main things that affect your value proposition to customers. Fortunately, you can test both of these to see where your value sweet spot lies.

One great way of testing pricing is to test discounts. For example, you could offer one group of customers three months of free use of your product, and another group 25% off for a year. These offers are financially equivalent but offer different value in the short term.

The charitable promotions web site Humble Bundle provides another great example of price testing. They use A/B testing to prompt customers with different suggested donations, and to determine the best times to add value to their deals.

Instead of testing pricing, you might consider testing features of your online product. What would happen if you gave users twice the space to upload photos? Would it lead to a financially beneficial increase in sign-ups? Why not run an A/B test to find out?

5. Checkout flow

Many of the tests we explored above for signup processes can also be applied to checkout flow. How much should you focus customers’ attention on completing checkout and paying for the product? What order should you collect payment and delivery information? Is there a good opportunity for additional information capture here?

You can be a lot more flexible in tests by realising that, with the right testing tools, conversion goals don’t need to be “yes/no” affairs. Some tools let you assign secondary goals or, even better, numerical goals that let you stipulate how good the outcome is. For example, in a checkout flow you might instruct your testing tool to put more weight on a conversion if the user has more items in their basket, returns to a basket after saving it, chooses a faster delivery option, and so on.
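
For instance, here’s a sketch of such a numerical goal for a checkout. The weighting scheme is invented for illustration; the point is only that a conversion can earn anything between 0 and 1.

```typescript
interface Checkout {
  items: number;
  returnedToSavedBasket: boolean;
  fastDelivery: boolean;
}

// Score a conversion between 0 and 1: bigger baskets and stronger
// buying signals earn a larger reward than a bare minimum purchase.
function conversionReward(c: Checkout): number {
  let reward = 0.4;                      // base reward for converting at all
  reward += Math.min(c.items, 5) * 0.08; // up to 0.4 more for a bigger basket
  if (c.returnedToSavedBasket) reward += 0.1;
  if (c.fastDelivery) reward += 0.1;
  return Math.min(reward, 1);
}

console.log(conversionReward({ items: 3, returnedToSavedBasket: true, fastDelivery: false })); // 0.74
```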


Hopefully these five crazy off-the-wall examples will inspire you to use A/B testing to improve your product or site in new and unconventional ways. We’ve seen all of these and more at Myna, and we’re constantly being surprised by the resourcefulness and creativity of our users. This isn’t to say that we don’t advocate conventional testing of strap lines, hero images, and button colours – these tests are and always will be perfectly valid. No matter how much you’ve tested your landing pages, though, it’s important to realise that there are always new optimisations to make. Who knows – your next test may just bust you out of a local maximum and take you to a whole new level of success.

A Myna by any other name

What does a small bird, native to South-East Asia, have to do with A/B testing?

Myna birds. Left: Common Myna, Right: Bali Myna
Left: Common Myna, photo by Yotam Orchan, Assaf Shwartz (source)
Right: Bali Myna, photo by Jcwf at nl.wikipedia (source)
Images licensed under the GNU Free Documentation License

The answer? It’s all in the algorithms.

Some of the sub-species of the myna (or mynah) are considered talking birds, i.e., birds that can mimic human speech.

As a myna (the bird) mimics human words, and is rewarded by positive reinforcement from its owner (or delicious, delicious seeds), it learns which are the most desirable words or phrases and knows to repeat them more in future.

In much the same way, as Myna (the revolutionary new A/B testing solution) repeats your variants, and is rewarded by successful conversions, it learns which is the most successful variant and knows to repeat that more in future.

As the page views roll in on a new test, we like to picture an excitable bird, squawking away and gobbling down treats, learning not to swear in front of polite company.

Even in our short lifetime, we’ve seen multiple spellings of the word: from minah to miyna, mynor to mhyna (what can I say, sometimes I just type too fast). While both myna and mynah are considered valid spellings for the bird, the A/B testing service is spelled only one way: Myna.

Countdown to launch

We’re pleased to announce that we’re finally coming out of beta, which means the introduction of our least-requested new feature: pricing plans. These will come into effect on 15th December.

We’ve got plans for all sizes of business and website, including a permanently free plan for low-volume users or sample tests. Sign up now and start A/B testing for free - you can upgrade whenever you’re ready.

All our existing beta users (and any new sign-ups) will have the opportunity to choose a plan before December 15th. On that date, everyone else will be transferred onto the free plan. Don’t worry! All existing tests and data will remain intact.

We’ll make further announcements on billing closer to the time, but for now, all the details are available on the Myna pricing page.

Calling all designers!

(…and bloggers, authors, and web admins…)

We are pleased to announce the launch of Myna for HTML – a simple new way to integrate Myna on your web site using 100% pure HTML. No coding required!

Up until now, integrating Myna has required some coding knowledge. This meant Myna was excellent for developers but posed a barrier for people who don’t code.

All that has changed – Myna for HTML is integrated directly into our dashboard so absolutely no programming is required. All you need is a little knowledge of HTML and CSS and a burning desire to optimise your website. (A Myna account will come in handy, too.)

Using Myna for HTML is a simple three-step procedure:

  1. copy-and-paste a snippet of code from our Dashboard;
  2. tag up your variants and conversion goals using HTML 5 data attributes;
  3. there is no step 3.

Full documentation is provided in our help pages, complete with live demos of common use cases. As always, if you need any help integrating Myna on your site, please get in touch!