Business Analytics Toolkit

8. How Should You Measure? Putting Your Business Analytics Framework into Action

Once you select your performance indicators, you will need a measurement approach that is feasible and blends smoothly with your roll-out. The best set of indicators is useless if you cannot track it reliably throughout implementation. Given that tech hubs are so diverse and usually implement a wide array of services, putting measurement into practice can be where things go wrong. For instance, it might look realistic and useful on paper to collect a large number of indicators, but then some metrics might turn out to be difficult or impossible to track; evaluation fatigue might lead to low response rates from clients and partners; or the collected information might be less useful than originally envisioned. Your business model, the agreement you have with infoDev, and the feasibility conditions you face in your local context will always determine the best approach to putting your business analytics into action. This section will introduce you to the basics.


8.1. Establishing a Performance Measurement System

You should set up a performance measurement system that serves as the master repository and documentation for all indicators you track and all analyses you conduct. A performance measurement system is at the heart of any business analytics approach. Over time, the system also becomes a resource for you to detect effects of larger trends in the ecosystem or results that change due to strategic decisions you make. The system will make all (codifiable) analytics available to you and others to whom you give access. You can see this as your performance and value creation database, which you fill with relevant and increasingly insightful bits of data, information, and analysis.

Box 12: The CMIP Performance Measurement System
The Caribbean Mobile Innovation Project (CMIP) was designed to foster entrepreneurship and the start-up ecosystem with a region-wide approach, nurturing early-stage mobile app innovators across the Caribbean islands. Given the complexity of the project, infoDev decided to conduct substantial feasibility scoping and business planning upfront. In 2014, infoDev gave a grant to the UWI consortium that covered a large part of the cost of the project for four years. infoDev, together with the Canadian government, also developed a framework of key performance indicators (KPIs) that would guide the CMIP's priorities. The KPIs firmly set the consortium's focus on project sustainability, setting expectations of increasing effectiveness and co-financing.


The main function of the performance measurement system is to provide you with a consistent structure that allows you to codify your findings and measurements. Ideally, the performance measurement system will become your one-stop shop for all information that is relevant to understand your performance and the value you provide to your clients. Through the logical links that exist between implementation quality/output indicators and value proposition/outcome indicators, the system will show the areas in which you are on track and in which you are not. Over time, the system will also allow you to set targets more realistically. It will help you maintain consistency and will become your frame of reference for all things related to performance and business analytics during the rollout and operations of your tech hub.
Keep it simple: Advanced software programs and intricate quantitative indicators are not always necessary. The market is flooded with software solutions that are meant to help you track and document your indicators, with packages offering more and less sophisticated analytical functions. While good software can definitely be useful, you need to find out for yourself what level of complexity is most useful for you and your staff. The challenge is that tech hubs are not typical businesses; specialized digital performance tracking tools and interfaces for them do not yet exist. Moreover, learning to use and maintaining a sophisticated software package can cost a lot of time and effort, so you will need to assess the usefulness of any given performance tracking software. When in doubt, keep it simple and accessible to others. Your tech hub will likely be unique, and many lessons will not fit into standard analytical and documentation software. A master spreadsheet with embedded notes might be all you need, and it will make updates and access easier for others.

There is no recipe for how a performance measurement system should look. The process of setting it up is a valuable learning exercise in itself. You should avoid looking for "cookie cutter" approaches to performance measurement. You need to understand that you cannot outsource performance measurement, not even the design of the measurement system itself. In fact, this is often where software use goes wrong: managers put faith and effort into technology but forget about the basics and their own responsibility. It is better to emphasize the system design process, and to involve your staff in it, than to slavishly follow complex software or scorecards. Start with your value creation goals and then define your indicators (as described in this toolkit), and you will be able to design and set up your own (simple) system. Easy-to-use desktop/mobile software and online tools - spreadsheets, note-taking, mind-mapping, and content management systems - are probably enough.

Establish a timeline and record your measurements at intervals that are appropriate for each indicator. A performance measurement system inherently becomes more valuable over time. If you set up an accessible and focused system, you will increasingly gain knowledge that is clearly documented in the system. Once you have a number of measurements, you will find that you can easily derive learnings by looking at and analyzing the evolution of important metrics. You will see that many indicators confirm your intuitions, but others might not, or they might make you aware of nuances, and you will be able to conduct ongoing self-checks. Naturally, you will need to consistently fill in your findings and results at specified time points for trends to become obvious. Which interval is most useful will again depend on potential legal or funder requirements, but also on a given indicator and the availability of data or the reporting structure that your clients agree to. For example, you might find it useful and viable to track your client entrepreneurs' revenue every quarter, while you might run a satisfaction and feedback survey only once a year. Bigger evaluations, for which you or an outside agency conduct multiple interviews and analyses to derive broader data, might only happen once every two years.
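To make the interval logic concrete, the sketch below shows how even a simple spreadsheet-style log - one row per indicator, period, and value - supports trend checks over time. All indicator names, periods, and figures here are invented for illustration, not taken from any mLab or mHub.

```python
# Hypothetical measurement log: one row per (indicator, period, value).
# A master spreadsheet with these three columns would serve the same purpose.
measurements = [
    ("client_revenue_usd", "2015-Q1", 42000),
    ("client_revenue_usd", "2015-Q2", 51000),
    ("client_revenue_usd", "2015-Q3", 47500),  # tracked quarterly
    ("satisfaction_score", "2015", 4.1),       # annual survey, tracked yearly
]

def history(indicator):
    """Return the chronological series recorded for one indicator."""
    return [(period, value) for name, period, value in measurements
            if name == indicator]

def latest_change(indicator):
    """Change between the two most recent measurements, if available."""
    series = history(indicator)
    if len(series) < 2:
        return None
    return series[-1][1] - series[-2][1]

print(history("client_revenue_usd"))
print(latest_change("client_revenue_usd"))  # 47500 - 51000 = -3500
```

The point is not the code itself but the discipline it encodes: each indicator has its own cadence, and once several periods are logged, changes and trends fall out of the data with almost no extra effort.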


8.2. Use Lean Start-up-style Hypothesis Validation to Extract Better Learnings

Continuous performance measurement and analysis are essential within a "Build, Measure, Learn" framework. Measurement, analysis, and learning for tech hubs are ongoing and should never be an afterthought. Large, holistic evaluations in two-year cycles or at the end of a particular programming period are useful to understand the bigger picture and derive in-depth knowledge. But such post hoc evaluations do not replace the need for ongoing learning. In the highly complex and uncertain innovation ecosystems that tech hubs interact with, there are hardly any tested and universally valid implementation and business models. This implies that a tech hub will likely perform worse than it should if it "flies blind" until a large-scale evaluation is conducted.

Your measurement should do more than track. It must also compare your findings with your projections and estimates. This kind of hypothesis testing will yield more meaningful insights, as the thought process you go through when making your estimates will make you aware of the assumptions you make and the beliefs you hold about how your services relate to your value proposition.

Ideally, at the beginning of every time period and for every indicator, you can make crude estimates of the results at the end of the period. You can make one very risky or optimistic hypothesis and one very conservative one. When you see the results at the end of the time period, you will be able to validate or disprove the hypotheses and assumptions that you made when planning your services and your impact. This works both for quantitative and qualitative indicators.
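One minimal way to codify such period-start hypotheses is a pair of estimates - one conservative, one optimistic - per indicator, checked against actuals at the end of the period. The indicator names and numbers below are hypothetical, purely to illustrate the mechanic:

```python
# Hypothetical period-start hypotheses: (conservative, optimistic) per indicator.
hypotheses = {
    "startups_incubated": (8, 15),
    "apps_launched": (3, 10),
}

# Hypothetical actual results recorded at the end of the period.
actuals = {"startups_incubated": 12, "apps_launched": 2}

def validate(indicator):
    """Compare an actual result against the hypothesis range."""
    low, high = hypotheses[indicator]
    actual = actuals[indicator]
    if actual < low:
        return "below even the conservative estimate - revisit assumptions"
    if actual > high:
        return "above the optimistic estimate - revisit assumptions"
    return "within the expected range"

for name in hypotheses:
    print(name, "->", validate(name))
```

Results that land outside the range in either direction are the interesting ones: they signal that an assumption behind your planning was wrong and deserves a closer, qualitative look.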

Such data-driven hypothesis testing is very similar to the Lean Start-up methodology that is widely seen as a success factor for tech start-ups and other early-stage or small and medium-sized businesses. The basic idea behind the methodology is that business model and product innovations inherently advance into uncharted territory, where evidence has to be collected to iteratively improve decision-making. At the same time, the method is particularly suitable for small start-ups with limited resources, as it emphasizes the need to keep business modeling and learning-oriented measurements agile and lean. In other words, in order to gain insights on its business, a venture does not necessarily need a large analytics department or complicated statistical projections; it should instead rely on deliberate, small real-world experiments that help validate or falsify hypotheses, which then lead to adjustments. Of course, this implies that a manager also has to be honest and admit when an assumption turns out to be false - these can actually be the most valuable lessons, leading to a pivot away from the wrong track.

While the method works particularly well for software and web-based start-ups, where extensive data collection from users is easier, it can also be applied to tech hubs. Tech hubs also work in uncertain and complex environments where not much data is available and many implementation options and service portfolios are possible. Iterative, lean, and simple hypothesis testing, based on a short list of important indicators, is likely to be more helpful than heavyweight, infrequent analysis in navigating innovation ecosystems and improving a tech hub's value proposition.

"Get Out of the Building" to test hypotheses in interactions with your clients. Ultimately, you provide value to your clients - early-stage, growth-oriented mobile app entrepreneurs. This means that they are the main source of information that can help you understand what you are doing well, what you could do better, or what you should not do at all. Setting up a performance measurement system thus does not replace direct contact with your clients; instead the two are complementary: the information you get from your clients will feed into the indicators in your system, and the system, by providing you with documentation and an overview of all measures over time, will give you ideas for new hypothesis tests that you can bring into conversations with your clients. In other words, you need to continue to "get out of the building" to engage with your clients and understand and sense their demands and concerns. This is another key pillar of the Lean Start-up methodology (usually referred to as customer development).


8.3. Budgeting, Planning, and Staffing for Continuous Analytics

You will have to plan for resources and time for business analytics and maintaining your performance measurement system; usually a monthly in-depth analytics session is helpful. As you have seen from this toolkit, business analytics is an ongoing process and not a one-off, ad hoc initiative. This implies that you need to plan for resources and not underestimate the effort that this will take. You will develop a routine and rhythm for when to delve deeper into the analytics. A monthly in-depth session that can, for example, precede strategy meetings with your advisory board or consortium is a good rule of thumb. In the hectic day-to-day schedule of an mLab or mHub manager, it will be tempting to skip these regular sessions. However, mLab managers have reported that regularity is vital to consistently observing changes and progress and to not getting lost in "putting out fires," running from one urgent but overall minor implementation issue to the next. Your performance measurement system should also be set up to blend in with your daily work: you should frequently look up data and document notes and findings.

As a rule of thumb, you as the manager should work with one staff member who spends a significant amount of time on measurement, evaluation, and analytics tasks. It is usually a good idea to assign business analytics tasks to one person on your team to guarantee consistency and coherence. Business analytics functions usually go hand in hand with knowledge management, strategic advisory, and multi-stakeholder feedback management, so you might think about charting out the terms of reference for this person so that there are synergies with such tasks.

8.4. Involve Outside Help

Research and consultancy organizations can help you with in-depth evaluations, in particular in the context of inflection points for your business model and strategy. Once you have developed a performance measurement system that works for you, you should be able to stay on top of your regular tracking and learning exercises. However, tech hubs have many indirect effects on the innovation ecosystem that are difficult to find and assess without a team of research and evaluation experts. In fact, the unexpected and initially unmeasured effects of tech hubs can be just as important as, or even more important than, the effects that are commonly anticipated and tracked (see box 13). For such broader evaluations that aim to get at the systemic and far-flung impact of your tech hub, it is recommended that you engage an outside agency with the required expertise.

Box 13: mLab East Africa Learning and Improving from Evaluation
mLab East Africa was the first mLab to get off the ground and also the first to undergo a comprehensive evaluation, conducted by the University of Nairobi. While some findings were expected, others were surprising and shaped the mLab's future decisions. For instance, incubatees clearly demanded more personalized mentorship, and Pivot East participants called for better follow-on support for finalists; both later became key factors in the design of the mLab's virtual incubation program.


Measurement of more complicated (for example, systemic) value creation and impacts is usually not something that you can do in-house. You should set the incentives in such a way that the agency gives you a neutral, honest, but constructive assessment.

It might help you to develop informal feedback channels to critical, informed outsiders, as they will help you to see new pathways and missing pieces in your business model. Just as start-ups benefit from having mentors, a tech hub manager can also benefit from getting constructive but critical feedback from someone who is not on the team. It is natural that a manager or leader of an organization is not well-positioned to see certain issues in the organization, as it is easy to lose the distance that is necessary to reflect on the "bigger picture," that is, the broad strokes of strategic direction in the context of a given market and ecosystem. It is advisable to identify someone on the fringes of the ecosystem, as this makes it more likely that the feedback is relevant but still includes fresh ideas that are not already circulating in the ecosystem. Such inputs will help you avoid complacency and biases.