Getting honest feedback early, without damaging your brand by exposing unpolished features

Getting useful feedback early in the development cycle gets you to a high-quality feature faster.

And the most useful feedback comes from users actually using something in the wild, rather than in a contrived setting. If you watch users encounter a feature during their real use of the product, their experience is “honest”: they are spending their own valuable time, so they aren’t pulling punches or saying “yes” to things they don’t really mean.

The problem I’m running into is that to get this feedback as soon as possible, I have to show features unpolished.

So my question is: how does one balance those two concerns? (Getting early, uncut feedback via actual use of a feature, vs. exposing unpolished features and thus showing users a lower standard of quality.)

I know large companies sometimes show new features to a small subset of users before releasing to everyone, getting great feedback without risking their brand with their entire user base. But my question is about the situation where you have a low volume of users (like an early B2B product), so you can’t afford to use them as “guinea pigs” because each one is very important.


> But my question is for the situation where you have a low volume of users (like an early B2B product), so you can’t afford to use them as “guinea pigs” because they are each very important.

Of course this is true, but it cuts both ways: if you don’t test, you can’t improve your product, which also negatively affects your users.

That being said, the testing does not have to be a gamble. It can be a measured exercise. The bigger companies you refer to are generally very good at measuring all interaction between users and the app.

So, for example, Facebook measures how many videos you watch on your timeline. When they roll out a new release of the timeline software, they do this:

  1. Before: they continuously measure the statistic so they know the “normal” value, let’s say 5 videos.
  2. They send the update to a small percentage of users.
  3. They measure how many videos those users watch; let’s say 2, a bad update.
  4. After enough measurement, they decide automatically (with a team overseeing it):

    if (currentVideosWatched < 0.5 * videosWatchedBefore) {
        alertDevelopersToFix();
        // FB doesn't seem to revert the release, but I know of systems that
        // automatically revert to the last stable version, which may work well for you.
    } else {
        increaseAmountOfUsersForThisVersion();
    }
    

You can make this process as manual or automated as you want.

That way they prevent really bad issues from escalating. You can, of course, do much the same thing in a simpler way. Read more here: http://arstechnica.com/business/2012/04/exclusive-a-behind-the-scenes-look-at-facebook-release-engineering/2/
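A minimal sketch of that decision step in Python (the function name, the 50% threshold, and the action strings are illustrative assumptions, not Facebook’s actual system):

```python
# Hypothetical canary-release check: compare a key engagement metric for users
# on the new version against the pre-release baseline, then decide what to do.

def canary_decision(baseline: float, current: float, threshold: float = 0.5) -> str:
    """Return the next rollout action based on a single engagement metric.

    baseline  -- average metric value before the release (e.g. videos watched)
    current   -- the same metric for users on the canary version
    threshold -- fraction of the baseline below which the release counts as bad
    """
    if current < threshold * baseline:
        # Alert the developers; some systems instead automatically revert
        # to the last stable version, which may work well for a small team.
        return "alert_and_hold"
    return "expand_rollout"


# Example: baseline of 5 videos, canary users watch only 2 -> bad update.
print(canary_decision(5, 2))    # alert_and_hold
print(canary_decision(5, 4.5))  # expand_rollout
```

You can make the threshold and the ramp-up schedule as conservative as your user base demands.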

To translate this to your case:

> The problem I’m running into, is that to get this sweet feedback as soon as possible, I have to show features unpolished.

It depends on your development method, of course, but in general you should be able to deliver features that “look” good. By that I mean: you should not deploy pieces that are simply not done yet; that doesn’t make sense. But you also don’t have to deliver fully polished versions. That’s the other extreme.

This is a professional judgment call you have to make. In general: build a less complex/complete version of the feature and launch it. When it gets traction, you can improve it further. Most important is that you collect the metrics, so you don’t have to guess around but can make real decisions based on facts.

> So my question is, how does one balance those 2 concerns? (getting early, uncut feedback via actual use of a feature, vs. exposing unpolished features and thus showing users a lower standard of quality)

Try to deliver small, good-looking features and take their initial measurements. That’s your baseline. You could even label them in the interface with a “beta” flag (though that might change user behavior).

For a small B2B product, I would personally flag the users you want to send beta updates to, so you can include the people who like giving feedback first.

The other real alternative is to hire dedicated testers. The problem is that they need to be experienced in the B2B field you are working in; otherwise they will only discover the technical bugs, not the business-logic bugs.

Offer two releases on your website.

One release is “The official release, for most typical users.”

The other release is “Our cutting-edge, pre-release build for those users who are adventurous enough to see what direction we are going and perhaps influence that direction by providing constructive feedback, and who don’t mind a little dust.”
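Serving two channels can be as simple as a version manifest keyed by channel. A sketch (the channel names and version strings are illustrative assumptions):

```python
# Hypothetical two-channel release manifest: "stable" for typical users,
# "edge" for adventurous users who opt in to pre-release builds.

RELEASES = {
    "stable": "2.3.1",        # the official release, for most typical users
    "edge": "2.4.0-beta2",    # cutting-edge pre-release build
}

def version_for(channel: str) -> str:
    """Return the download version for a channel, defaulting to stable."""
    return RELEASES.get(channel, RELEASES["stable"])


print(version_for("edge"))     # 2.4.0-beta2
print(version_for("stable"))   # 2.3.1
print(version_for("unknown"))  # 2.3.1 (unknown channels fall back to stable)
```

Defaulting unknown channels to stable means nobody gets a pre-release build by accident.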

Find a volunteer.

A company I worked for had many tech-illiterate customers and a few tech-savvy ones. The tech-savvy ones were happy to be the first to receive and provide feedback about the latest versions. They knew they were our guinea pigs, and they loved it.

A variation on the first answer: the company I worked for had a dedicated testing instance that customers could use knowing this was actual pre-release code and testing data (testing users could add their own data just as they could in production). Mostly it was used for compatibility testing, making sure that their software worked correctly with our new software. Since it’s known to be in testing, it seems like a straightforward way to do what you’re wanting.
