February 2011 Archives
- First, I like to tie everything back to distribution, i.e. people using your product. You should be tracking quantifiable long-term metrics, e.g. number of signups, uploads, shares, whatever. For my search engine startup, my main metric is number of direct searches/time.
I believe tying back to distribution entails rephrasing the question "what feature or bug should I work on next?" as "if I work on X, will it result in significantly greater distribution, and if so, how much?" Note that greater distribution can come from either existing users using the product more or new product use from new users.
Once answered, you can then roughly focus on areas you think will have the highest marginal benefit with regards to distribution. For example, something directly related to distribution, e.g. a marketing effort or social/sharing feature, probably has a high marginal benefit.
You can't forget word-of-mouth distribution though. Most of my distribution to date has come via word of mouth, and I believe that is due to the product. For a product-oriented person like me, that is a great place to be: working on the product also helps distribution. But you still can't get sucked into the false belief that product features always trump direct distribution features, because they don't.
Similarly, how many of your users (including potential users) will the feature affect? Something that appears on a site all the time generally should take precedence over something that only occurs in a small corner of it. Of course that is mitigated by severity and impact, but you get the idea.
- After distribution, the second principle I apply is using real feedback to substantiate my decisions. Such feedback could be actual data or, more often in my case, user interaction. Using these data sources you can get a much better sense of the marginal benefit than your initial guesses provide.
You usually do need some kind of user base to get these kinds of signals, but often not as big a one as you might think. I've tested various feedback button sizes, shapes and positions and found that moving them around has a dramatic effect on the amount of feedback received. The same goes for links to forums, chat rooms, etc.
I encourage all startups to essentially maximize these input streams since the data is so valuable. Using feedback appropriately greatly improves the chances you are actually working smarter (as opposed to just thinking you are working smarter).
- Third, I like to put a lot of minimum viable products (MVPs) out there. MVPs are not just for the initial version of your product. Maybe they should be called minimum viable features (MVFs).
In other words, I like to ship code. It's not always the prettiest code, but it allows me to move on to another feature/bug and let the first one simmer.
By letting it simmer you allow yourself and your users to experience it in some form, and suggest incremental improvements that you often didn't think of at design time. I absolutely love this aspect of shipping code. Again, the bigger the user base, the bigger the effect.
The other, perhaps bigger reason, to do a lot of MVFs is it isn't readily apparent a priori what is going to work and not work with regards to distribution. It's sort of like A/B testing in that it can be non-intuitive. By planting a lot of different seeds, you are spreading your risk a bit hoping that some of them will blossom, or more often than not, prompt you to think of new related or combined efforts that eventually turn into something meaningful.
- Fourth, I batch things. There are so many silos of code and each takes some time to really get into and be effective. So I try to wait until there is a decent amount of stuff to do in that area before jumping in, which maintains efficiency.
- Fifth, I inject randomness or happiness or whatever you want to call it. Since none of this is an exact science, I generally work on things that strike me in the moment as interesting, given the above constraints. That's not always possible but it contributes to my not getting burnt out.
- Build something parallel on the Web and leverage that user base, which can be grown through SEO/SEM/Social media and other Web distribution channels.
- Make an API that can be embedded in apps and leads to downloads.
- Partner with large Web properties and get them to push your app.
- Tie your app to off-line interactions.
- Build inherent mobile virality into the app, i.e. mobile to mobile invites/shares.
- Buy downloads in the app store as well as in-app ads, and monetize well enough that this is worthwhile.
- Application deadline: next Friday, February 18 (at 11:59PM EST).
- Event date: March 16 (evening).
- Startup area: software, e.g. Internet/mobile/gaming/etc.
- Needed: at least a working demo.
- Application link: http://ye.gg/oaf.
- Cost to apply: $0!
The format for presentations is a five-minute demo followed by a five-minute Q&A session. Afterwards there will be plenty of time for talking with these angel investors (besides me):
I get a lot of feedback about adding DuckDuckGo (a search engine) to users' Web browsers. I thought I would synthesize that feedback in the hope that these usability issues might be addressed.
- Make the AddSearchProvider and IsSearchProviderInstalled functions work as one expects them to, i.e. not failing silently or always returning false/true. If you do have a concept of a default engine, let IsSearchProviderInstalled see that too, or add another function to query that boolean value.
- Make the dialog box that results from AddSearchProvider allow the user to a) make the new engine the default/current search engine; and b) change the URL string via an advanced section (one that offers useful help text).
- Use the well-established OpenSearch meta tags appropriately, i.e. to suggest engines to add (as opposed to ignoring them or adding them automatically).
- Make it obvious how to change providers and edit them after they have been added. Executing AddSearchProvider could pop up an edit dialog, for example.
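To make the silent-failure complaint concrete, here is a minimal sketch (not from the original post) of how a site might offer itself as a search provider. `window.external.AddSearchProvider` takes the URL of an OpenSearch description document, and `IsSearchProviderInstalled` is specified to return 0 (not installed), 1 (installed but not default) or 2 (installed as default) for same-origin queries. The OpenSearch URL and the helper function name are illustrative assumptions; the helper takes the `external` object as a parameter so its logic can be exercised with a stub outside a browser.

```javascript
// Illustrative OpenSearch description URL, not an official endpoint.
var OPENSEARCH_URL = "https://duckduckgo.com/opensearch.xml"; // hypothetical

// `external` is normally window.external; it is passed in so the
// behavior can be checked with a stub outside a browser.
function offerSearchProvider(external, openSearchUrl) {
  // Feature-detect first: not every browser exposes these methods.
  if (!external || typeof external.AddSearchProvider !== "function") {
    return "unsupported";
  }
  if (typeof external.IsSearchProviderInstalled === "function") {
    try {
      // 0 = not installed, 1 = installed, 2 = installed as default.
      if (external.IsSearchProviderInstalled(openSearchUrl) > 0) {
        return "already-installed";
      }
    } catch (e) {
      // Some implementations throw for cross-origin or unsupported
      // queries -- one of the odd failure modes noted above.
    }
  }
  external.AddSearchProvider(openSearchUrl); // pops the browser's add dialog
  return "offered";
}

// Example (in a browser):
//   offerSearchProvider(window.external, OPENSEARCH_URL);
```

The OpenSearch suggestion in the third bullet corresponds to autodiscovery markup in the page head, along the lines of `<link rel="search" type="application/opensearchdescription+xml" href="/opensearch.xml" title="DuckDuckGo">`, which browsers can use to offer the engine rather than ignoring it or adding it automatically.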