7. What Should You Measure? Developing a Business Analytics Approach and Selecting Indicators

7.1. General Guidelines for Tech Hub Performance Indicator Selection

You should always remind yourself of the general principles of business analytics, as this will help you to avoid common mistakes. In infoDev's experience, tech hubs and incubators often struggle to follow general principles of performance measurement and evaluation (see figure), either because they lack awareness or because they do not prioritize this agenda early on, only to feel the consequences later.

Tips for indicator selection

7.1.1. Less Can Be More

There is often a temptation to measure everything that can be measured. More data is easily confused with more information and, therefore, more knowledge. Yet the more data points you have, the more capacity you need to make sense of them, and the harder it becomes to see what really matters to your specific mission. Further, because implementation is what we do every day, most indicators tend to be developed at the output level, with too little attention to the measures that really matter in the long term: outcome indicators. Outcome indicators measure your added value, and if you cannot demonstrate this value, the position of your tech hub becomes much more tenuous. For instance, the number of people in a contact database or the number of people attending a prototyping competition can be meaningful if your goal is to reach many stakeholders, but if your goal is to create start-ups, the link between these data points and your success is less clear (see box 9).

Box 9: Can Hackathons Create Start-ups?

Ideation and prototyping competitions have become popular in recent years. The notion that hackathons create a large number of functional mobile app innovations in a very short time is compelling. For business analytics, there is a temptation to count the number of innovators attending hackathons and the number of prototypes to infer that many innovations have been created. But infoDev's and others' experience has clearly shown that hackathons by themselves rarely create sustainable app innovations or viable start-ups. Instead, based on early lessons, mLabs, mHubs, and infoDev have quickly shifted towards innovation competitions that emphasize the depth of coaching and mentoring, including strong event preparation and follow-on support. While this means that fewer teams and individuals can be supported, the impact on start-up creation and success can be much higher. Such effects, however, have to be tracked over time by following the paths of innovation competition winners and those that did not win a prize. The amount of revenue they make and how much it increases might be a better measure for the success of a competition. See http://www.infodev.org/m2work and http://www.infodev.org/mobilebusinessmodels for more information.


Another problem with measuring too much is that your stakeholders will get tired of being evaluated, especially if you rely on surveys and interviews. You should ultimately arrive at a clean, clear, short list of qualitative and quantitative indicators that you want to measure, which should only contain those indicators that are definitely relevant for the impact you want to have and the value you want to provide.


7.1.2. Focus on Your Contribution

The standard approach to impact and performance evaluation is to compare a group of people that has used or benefited from the offered service (the treatment group) with a group of people that was similar at the outset but did not receive the service or support (the control group). Whatever success the treatment group achieves above and beyond that of the control group can be claimed as your contribution. This amounts to conducting a real-world experiment, and the logic mirrors evaluation approaches such as randomized control trials of health care interventions or A/B tests in (online) marketing.
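The treatment-versus-control logic can be illustrated with a minimal sketch. All figures below are hypothetical and invented purely for illustration; they do not come from any mLab or mHub data:

```python
# Illustrative treatment-vs-control comparison (hypothetical revenue data).
# The estimated "contribution" is the average outcome of supported start-ups
# minus the average outcome of a comparable unsupported group.

def average(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Hypothetical annual revenue growth (%) for two comparable groups
treatment_group = [32, 18, 25, 41, 12]   # start-ups that used the hub's services
control_group = [15, 9, 20, 11, 14]      # similar start-ups that did not

effect = average(treatment_group) - average(control_group)
print(f"Estimated contribution: {effect:.1f} percentage points of growth")
```

The arithmetic is trivial; the hard part in practice is assembling a control group that is genuinely comparable to your clients at the outset.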

However, it is extremely difficult to find control groups for the kinds of services that mHubs and mLabs offer. Stakeholders of innovation ecosystems are difficult to compare because they depend on each other in complex ways, so isolating a tech hub's contribution requires substantial effort. While you should keep your contribution to your clients' success in mind, not all of that success is a result of your support.

You should ask yourself:
Where would my clients be without receiving the value that I provided for them?
What are my clients doing differently as a result of the services (the value) I have provided to them?


7.1.3. Longitudinal and Before/After Data Is Key

The best alternative to a proper contribution analysis is to track data for the supported group over extended periods of time and make inferences about changes that were very likely due to the service you provided. For instance, if a small start-up with flat revenues for the preceding year joins your program and its revenue increases by 30 percent within three months, you have good reason to believe your support was useful. This also works for specific services within your portfolio: you can compare revenue evolution during the year before a new mentor was recruited with revenue evolution over the following year.
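The before/after comparison behind the revenue example above can be sketched as follows (again, the figures are hypothetical):

```python
# Illustrative before/after comparison (hypothetical figures).
# Compare a client's revenue before joining the program with revenue
# some months in, to infer whether support plausibly made a difference.

def pct_change(before, after):
    """Percentage change from 'before' to 'after'."""
    return (after - before) / before * 100

# Hypothetical monthly revenue (USD): roughly flat for a year pre-program
revenue_before_joining = 1000
revenue_three_months_in = 1300   # three months after joining the program

growth = pct_change(revenue_before_joining, revenue_three_months_in)
print(f"Revenue change since joining: {growth:.0f}%")
```

The same calculation applies to any indicator you track longitudinally, such as revenue before and after a new mentor was recruited.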

The same is true with perceptual and qualitative indicators such as satisfaction with your service that clients express in surveys and interviews. It is essential that you consistently and rigorously track key indicators and collect additional contextual information, which should be easier once you identify a short list of indicators as described in the first principle.

7.1.4. Use Quantitative Information but Target It Wisely

Numbers are a powerful communicator, and almost every business analytics approach will include some sort of quantitative element. However, making quantitative data reliable and meaningful is often not as easy as it seems. For instance, the number of people joining your events might give you a good idea of your reach in the local ecosystem, but it does not tell you much about the quality of connections that participants made during the event and how this changed their success or the success of your immediate clients. Quantitative data collection was also cumbersome for mLabs and mHubs because their activities and services change frequently depending on ecosystem needs, and a lot of effects happen through informal exchanges and through indirect paths.

So, before you reach for data points just because they are easily available, ask yourself how the value you provide can be quantified meaningfully. What is the best proxy measure for the kind of value that you want to achieve? This exercise goes hand in hand with the first and second principles: find a handful of quantitative indicators that directly speak to your value proposition, focus on those, and track them consistently.

 

7.1.5. Don't Underestimate the Qualitative

Quantitative data will always be part of your business analytics, but it will rarely be enough for you to understand and improve your performance and value proposition. This is because many of the effects of the services that mLabs and mHubs provide unfold only over time and in indirect ways. You are trying to affect different elements of a complex innovation ecosystem, so some things will remain uncertain and impossible to fully capture and quantify.

This is where qualitative data can be powerful. First, it can help you to get a rough idea of the magnitude of your contribution: if your clients express that they could not have achieved what they did without your support, this will give you a clear indication that your program provides value - which is also why testimonials are generally such a powerful communicator.

Second, people's perceptions drive their actions. You can also use qualitative evidence to understand the effects of your support on your clients' and other ecosystem stakeholders' changing beliefs and motivations. For instance, local innovation ecosystems are in large part driven by buzz and excitement, and community building in particular relies on individuals' vision and drive. So if you can reliably track, for instance, that your clients' confidence to start a business or lead a community has changed as a result of your support, this is evidence of a valuable contribution that you would not be able to capture with quantitative data alone.

Third, other lessons, which are strongly tied to context and situation, can be difficult to capture with quantitative indicators. For example, a mentor with a strong personality could work well for some of your clients and not for others, or a political incident could affect the ecosystem. Such information, which is very important for understanding your clients' success, is unlikely to be uncovered through quantitative data.

 

7.1.6. Find the Right Mix Between Consistent and Flexible Measurement

Simple quantitative indicators provide the most insight when measured consistently over extended periods of time. On the other hand, as an mLab or mHub that is starting out, you will almost certainly adjust your business model, or you might even pivot to an entirely different one, which will of course affect your choice of performance metrics. Balancing consistency and flexibility also means that the indicator selection process will be repeated over time - very much in the spirit of the "Build, Measure, Learn" principle - and you and infoDev will need to adjust flexibly whenever this helps you better understand your value contribution and impact.

Altogether, this means that you will need to be consistent and flexible at the same time. In other words, you should differentiate between those key (quantitative) indicators that are most insightful when tracked continuously and consistently, and those where you can flexibly adapt without losing too many insights as you adjust your business model. This point emphasizes even more that the handful of key quantitative performance indicators that you want to track should be chosen carefully, as you will benefit from sticking with them over a long time period. Qualitative indicators, on the other hand, lend themselves more to adjustments; you can more easily incorporate additional contextual and exploratory information queries, for instance, taking interview questions in a different direction once you uncover something new or surprising.

 

7.1.7. Integrate Performance Tracking in Agreements with Your Clients and Partners

A big challenge with performance measurement is that, while there is wide agreement that it is relevant and useful, the people who hold the critical information (in particular, clients and partners) are often reluctant to spend the time and effort to share it. There is often a fundamental misalignment of incentives: your clients and partners might think that the information is relevant only to you and not to them, and that they gain nothing by spending time sharing it with you.

This means that you need to go the extra mile to convince your clients and partners that your performance tracking will benefit them as well. It also implies that you have an obligation to keep information requests to clients and partners to the necessary minimum, in line with the principle of focusing on only a few key indicators and tracking them consistently. Lastly, you should share the insights that you derive with your clients and partners, at least insofar as they are relevant to their work. There is a chronic dearth of information in many if not most innovation ecosystems in developing countries, so your clients and partners are likely to appreciate your evaluation efforts if you show them the knowledge they gain in return for sharing the information you need.

It will also help if you discuss your performance measurement requirements with clients and partners early on. You should include specific clauses and requirements in the formal support agreements that you set up with your start-up and entrepreneur clients. This will help them anticipate the time and effort that they are expected to invest in providing you with feedback and information on their progress. In particular, if your business model relies on revenue sharing or royalties from start-ups and entrepreneurs, you will need to formally agree with your clients on reporting requirements that extend even beyond the duration of their participation in your program.