DIY Business Intelligence: Caution, Danger Ahead

Imagine if Galileo were able to plot the movement of the planets he observed through his telescope, record them in Excel, analyze the data with Crystal Reports, and produce an impressive dashboard that clearly showed the earth was not, in fact, the center of the universe. Would the Roman Inquisition have been more receptive to his conclusions and spared him from charges of heresy? Perhaps this example is a bit extreme, but consider modern-day executives. Do they accept at face value the conclusions of business intelligence (BI) dashboards, or do they question the data, the analysis, and the conclusions, as the Roman Inquisition did of Galileo? Even in the age of information, BI is not something to be taken lightly, especially by those who try to do it themselves.

BI as a concept has been with us since Hans Peter Luhn, the renowned IBM researcher, coined the phrase in 1958. BI tools and platforms have been in abundant supply for many years. In theory, we should have all the intelligence we need to make insightful decisions and continually improve our business performance. In practice, it’s not that easy, as I’ve discovered firsthand and observed in other businesses.

When I ran my own Web development business for the better part of the last decade, I wasn’t even aware of the term “business intelligence.” I did, however, know that there was certain information I needed to manage my business effectively. Since we were a small business (10 staff), it shouldn’t have been that difficult to collect data, analyze it, and draw conclusions about what worked and what needed changing. Or so it seemed.

CARING ABOUT BI

Initially I was one of three directors in the company. I was of the opinion that we needed to have regular management meetings and that a key part of these meetings should be assessing our performance. This was not the consensus view. My fellow directors believed all that mattered was that we were bringing in more money than we were spending. They were right on one level — to survive, that’s all that really mattered. Where we differed was in our views on management. I believed it was important to examine our performance in order to understand where we could improve; they believed we just needed to make sure we covered our costs and made a bit of profit. The hurdle here was getting my colleagues to believe in a need for BI in the first place. It wouldn’t matter if I had the best executive dashboard in place if my fellow directors didn’t care about the information it presented. My first lesson in BI was that those in charge actually needed to want it; otherwise it served no purpose.

As time passed, my fellow directors moved on, and I was left solely in charge. Now I could gather as much business intelligence as I wanted. The difficult question this posed was, what did I need to know? It seemed like a simple question, especially for a small business, but it’s not that easy when push comes to shove. I decided to keep it simple; I was also lucky to have hired an ex-business owner who had a passion for statistics and was happy to compile as many reports as I wanted.

DATA COLLECTION AND ANALYSIS

Given that the work was mainly fee for service, there were only two types of data I needed to collect: time and money. As a small business, it wasn’t hard to ensure that we had a job code for each piece of work and that employees recorded their time against those codes. From this, we had a comprehensive data store on how much time we spent on any particular piece of work. This wasn’t due to great foresight on my part, but more a stroke of luck. As we charged for some work on an hourly basis, we needed to track the time spent. Fortunately the accounting package we used had a time-tracking system, and with a little bit of discipline, we were able to ensure all time spent by employees was recorded against individual job codes. Of course, this wasn’t a complete picture — getting my fellow directors to track their time was nigh impossible — but at least for the employees, I had a decent data store.

The key metrics I wanted to know were:

  • Cost per hour to run the business (daily/weekly/monthly)
  • Revenue generated per hour (daily/weekly/monthly)
  • Dollars per hour generated by client
  • Dollars per hour generated by project
  • Estimate versus actual by project

With these details, I would be able to tell if I was making a profit on a particular client or project and also to see how I was performing against my estimates. From this I could determine how a particular project was progressing, if a particular client was profitable short and long term, and/or if it was OK to go over on a small project, knowing from previous history that I would make the money back on a large project for that client.
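
To make this concrete, here is a minimal sketch in Python of the arithmetic behind these metrics. The records and figures are hypothetical (ours lived in the accounting package); the point is that two inputs, time logged against job codes and invoiced dollars, are all these calculations require:

    from collections import defaultdict

    # Hypothetical records; in practice these lived in the accounting
    # package's time tracker and the invoicing ledger.
    time_entries = [("J001", "Acme", 12.0),   # (job code, client, hours)
                    ("J001", "Acme", 8.5),
                    ("J002", "Birch", 20.0)]
    invoices = [("J001", "Acme", 3400.00),    # (job code, client, dollars)
                ("J002", "Birch", 2100.00)]
    estimated_hours = {"J001": 18.0, "J002": 25.0}

    hours = defaultdict(float)
    revenue = defaultdict(float)
    for job, client, h in time_entries:
        hours[job] += h
        hours[client] += h
    for job, client, dollars in invoices:
        revenue[job] += dollars
        revenue[client] += dollars

    # Dollars per hour by project and by client...
    for key in sorted(revenue):
        print(f"{key}: ${revenue[key] / hours[key]:.2f}/hr")
    # ...and estimate versus actual by project.
    for job, est in estimated_hours.items():
        print(f"{job}: estimated {est:.1f}h, actual {hours[job]:.1f}h")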

TAKING ACTION

I did get this business intelligence, and I did see that some clients and projects were more profitable than others. What was difficult was acting on this information. I had some clients that were, simply put, difficult to deal with. A task that would take four hours for one client would end up taking much longer for another because they were hard to please and there would be continual revisions. I believed in quality of service and would bend over backward to please clients. However, that service came at a price, quite literally. The BI that I gathered over time told me exactly how much it was costing me to work for a particular client, but I found it extremely difficult to say no when they approached me with new work.

The other aspect I found difficult was making changes midway through a project when the intelligence gathered showed it was over budget. For example, on one project I budgeted 100 hours for the creative component, but through weekly reports I learned we had spent 150 hours on creative. I knew this was happening but felt helpless to change it. The client wasn’t sure what they wanted, while also being quite particular, so it took a long time to get the designs to a point where the client was happy. Even though I knew we had exceeded the estimated time, I didn’t feel that I could simply stop the work or ask the client for more money. But at least I knew the impact, and in the future I put in place a practice that stated clearly that we would do three design reviews, after which further reviews would incur additional costs. This was fine in theory but still a challenge to implement — especially with design, as it is such a subjective area.

The upshot? I knew the information I wanted, I kept it simple, I had an effective means of collecting the information, and I had an established report that gave me the information I needed in a manner that meant something to me. Yet the final and most important aspect of BI, making decisions and acting on the information, wasn’t always that easy. And that’s for a small business! From working in larger organizations, I’ve discovered that getting to the point I was at was actually a luxury that I failed to appreciate at the time.

SCALING UP

It’s one thing to manage BI for a small firm where I had full control, the right information, and the means to analyze it efficiently. I discovered it was a much harder challenge for a larger business that had over 15 years of history without BI.

It was an eye opener to see that the simplistic thinking that beset my business was present in a company that was many times larger. The business was also in the Web space, providing similar solutions to my own business but on a larger scale. With hundreds of employees, you would expect the appropriate measures to be in place to monitor performance. This was not the case. It was another example of the “make sure we make more than we spend” mindset. I believe in the K.I.S.S. principle as much as the next person, but this was taking it to extremes. Clearly the business was surviving, but there was no way to tell where the money was being made and where it was being lost. This didn’t seem to matter until the company was acquired and the new management insisted on knowing more.

After 15 years of not bothering with the details, trying to get the basics in place was like trying to get the genie back into the bottle. The first problem was a lack of data. There wasn’t any. It wasn’t even possible to be sure how much a client was quoted for a piece of work, let alone if that’s what the company billed for it or how long it took to deliver.

THE HAZARDS OF HOMEGROWN SOLUTIONS

The first step was to put a time-tracking system in place. Rather than use an existing system, the company decided to build its own and to integrate it with a task-tracking system that was already in place. The theory was that the data would be integrated from day one. It was a nice idea but flawed, in that the task-tracking system was not always used or was used in different ways. Nonetheless, the time tracker was built in-house and deployed. Employees were expected to track their time.

It didn’t take long for the flaws to emerge. The time-tracking software was extremely buggy, a perfect example of the plumber’s tap being leaky. Release after release reduced the bugs, but the damage had been done; staff didn’t trust the time tracker and were reluctant to use it. Even when it became more stable, staff still grumbled and groaned whenever it was mentioned.

Another issue that surfaced was a lack of any guidance as to what was to be tracked, how many hours each employee was expected to record, and what happened if they had no task to record the time against. In theory, any work that was to be done required a task to be created in the task management system. For example, if the project manager wanted a developer to implement a new feature on an existing Web site, the idea was to create a task in the task management system that would have all the details of the work to be completed along with a unique ID. The task would then be assigned to the developer, who would then track the time it took to complete the task against that task ID. However, this didn’t always happen. Sometimes a project manager would simply ask a person directly to perform a task, and it would get done without anything being recorded. There were no rules to say that a project manager had to create a task or that developers should not do any work unless they had a task in the system to track their time against.
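
The intended flow is easy to sketch, and so is the hole in it. The Python below is a hypothetical reconstruction, not the actual in-house system:

    import itertools
    from dataclasses import dataclass, field

    _next_id = itertools.count(1)

    @dataclass
    class Task:
        """A unit of work in the task management system, with a unique ID."""
        description: str
        assignee: str
        task_id: int = field(default_factory=lambda: next(_next_id))
        logged_hours: float = 0.0

    # The intended flow: the project manager creates a task, and the
    # developer logs time against its unique ID.
    task = Task("Implement new feature on existing site", assignee="dev")
    task.logged_hours += 3.5
    print(task.task_id, task.logged_hours)

    # The failure mode: work requested directly, in a hallway conversation,
    # never becomes a Task, so there is no ID to log time against and
    # therefore no record of the work at all.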

Some staff members simply didn’t bother to track their time. They didn’t care, didn’t see the need, didn’t trust the time tracker itself, or, in some cases, didn’t have access to it (e.g., if they were working offsite). There were no penalties for not creating tasks in the system, so it didn’t matter whether people did it or not.

After six months of trying to gather data, it was clear further measures were needed. Not enough thought and effort had been invested up front, and the results showed it. It wasn’t until clear targets were put in place that it became obvious how poorly the time-tracking system was working. The targets were simple: each employee was expected to log a minimum number of hours a day, and those who didn’t appeared in a daily exception report. This was circulated to all team leads, who could then follow up with the individual staff members. After only a few weeks, approximately 90% of staff were recording their time consistently.
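
The mechanism that finally worked was almost trivially simple, which may be why it worked. Here is a sketch with a hypothetical daily minimum; the exact target mattered less than the visibility it created:

    MIN_HOURS_PER_DAY = 7.0  # hypothetical target; the actual figure varied

    def exception_report(logged_today, all_staff):
        """Return (employee, hours) for everyone under the daily minimum.

        Staff with no entry at all count as zero hours, which is exactly
        the case the report needed to surface.
        """
        return sorted((person, logged_today.get(person, 0.0))
                      for person in all_staff
                      if logged_today.get(person, 0.0) < MIN_HOURS_PER_DAY)

    staff = ["alice", "bob", "carol"]
    logged = {"alice": 7.5, "bob": 4.0}          # carol logged nothing
    for person, hours in exception_report(logged, staff):
        print(f"{person}: {hours:.1f}h (minimum {MIN_HOURS_PER_DAY}h)")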

FAULTY LOGIC

The next challenge was to make use of the data and start generating meaningful reports. Alas, this was not to be. The first attempt was to combine the time data with the information stored for invoicing and calculate the dollar return per hour for that piece of work. On the surface this approach made sense, but it was beset by numerous hurdles.

The theory was pretty simple. In the invoicing system (also self-built), a job would be entered with a dollar value. This would be linked to a stage in the task management system where the tasks would be entered and staff would log time against the task. All that was needed was to add up all the hours for all tasks for the stage that was linked to a particular job. Then, by dividing the dollar value by the number of hours, we could ostensibly tell what the dollars-per-hour figure was for that job. For example, a job would be created for $25,000. This would be linked to the stage “Creative Design.” If the hours tracked against tasks in that stage equaled 279, then the return per hour would be 25,000/279, or approximately $89.61. On top of this, a traffic light system was used to indicate if the job was profitable: if the return was higher than x, it was green; between y and x, it was yellow; and below y, it was red. All the jobs would then be displayed on a dashboard showing whether they were green, yellow, or red.
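
In code, the whole scheme fits in a dozen lines. The thresholds below are hypothetical, since the actual cutoffs were never shared with the people being measured (which, as we’ll see, was part of the problem):

    GREEN_ABOVE = 100.0   # $/hr; hypothetical cutoff
    YELLOW_ABOVE = 75.0   # $/hr; below this is red

    def return_per_hour(job_value, hours_logged):
        return job_value / hours_logged

    def traffic_light(dollars_per_hour):
        if dollars_per_hour >= GREEN_ABOVE:
            return "green"
        if dollars_per_hour >= YELLOW_ABOVE:
            return "yellow"
        return "red"

    rate = return_per_hour(25_000, 279)          # the job from the example
    print(f"${rate:.2f}/hr -> {traffic_light(rate)}")
    # -> $89.61/hr -> yellow (under these assumed cutoffs)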

The first problem was the connection between the invoicing system and the task management system. Client service managers were responsible for entering the jobs into the invoicing system. A project would be broken into several jobs, each with its own invoice, to reflect the payment schedule as set out in the contract. Project managers were responsible for setting up the stages in the task management system. These would commonly reflect the different stages of the project (analysis, design, development, testing, etc.). What would happen was that the first job would be associated with the first stage, but the number of hours associated with that stage wouldn’t reflect the dollar value of the job. For instance, the first stage might be called “Project Initiation,” consisting of a kickoff meeting and confirmation of requirements, and take only 50 hours of the total 780 hours budgeted, or approximately 6% of the total. However, as a part of the payment schedule, the first invoice would be for 30% of the entire project value. In this example, the return for the first job would be over $1,000 per hour, and the return for the second job would be adversely affected. There was a fundamental disconnect between the job and the stage that was not understood.
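
To see how badly a payment milestone could skew the numbers, assume a hypothetical $200,000 project; the 30% first invoice and the 50-of-780-hour first stage are the figures from the example above:

    project_value = 200_000.0   # hypothetical; the 30%, 50h, and 780h
    total_hours = 780.0         # figures come from the example above
    first_invoice = 0.30 * project_value        # $60,000 payment milestone
    first_stage_hours = 50.0                    # ~6% of budgeted hours

    print(f"first job: ${first_invoice / first_stage_hours:,.2f}/hr")
    # -> first job: $1,200.00/hr
    remaining_value = project_value - first_invoice
    remaining_hours = total_hours - first_stage_hours
    print(f"remaining jobs: ${remaining_value / remaining_hours:,.2f}/hr")
    # -> remaining jobs: $191.78/hr (the later jobs absorb the deficit)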

The next issue was that the number of jobs (or invoices) wouldn’t match the number of stages set up in the task management system. This would lead to hours for tasks for a particular stage not being attached to an invoice. Since the time for that stage would not be taken into account, it would, for all intents and purposes, be lost.

Another issue was the assumption that all work was done at the same hourly rate (used to calculate whether the project was red, yellow, or green). Due to the different relationships with clients, different hourly rates were charged. Long-term and large clients were often charged a lower rate, making the results appear worse than they should have been because the wrong hourly rate was used. For example, a job for $15,000 divided by an hourly rate of $150 would allow for 100 hours to be spent. However, if the client was on a rate of $120 per hour, that job should allow for 125 hours. Using the $150 hourly rate would mean that the dashboard would show the job as 25% over budget, when in fact it wasn’t over at all.
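
The fix would have been a one-line change: divide by the client’s actual rate rather than a single assumed one. A sketch using the figures above:

    DEFAULT_RATE = 150.0   # the single rate the dashboard assumed
    CLIENT_RATE = 120.0    # the rate this client was actually on

    def budget_hours(job_value, hourly_rate):
        """Hours a job can absorb before it is genuinely over budget."""
        return job_value / hourly_rate

    job_value, hours_spent = 15_000.0, 125.0
    assumed = budget_hours(job_value, DEFAULT_RATE)   # 100 hrs
    actual = budget_hours(job_value, CLIENT_RATE)     # 125 hrs
    print(f"dashboard: {hours_spent / assumed - 1:+.0%} over budget")  # +25%
    print(f"reality:   {hours_spent / actual - 1:+.0%} over budget")   # +0%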

These problems weren’t the only ones. Initially the dashboard was only visible to client service managers; project managers had no idea that it even existed. Soon they learned about the dashboard, but no one explained to either the client service or project managers how the results were generated. All they were told was that the goal was to keep all jobs “in the green.”

The initial release of the dashboard was greeted with shock, as many jobs were “in the red.” The client service managers were expected to fix this, but it was the project managers who were in charge of the work. Not only that, but there was no explanation provided as to how the classifications of green, yellow, or red were calculated. After the initial shock wore off, client service managers started to learn that when an invoice was “red,” it didn’t necessarily mean there was a problem.

It could be that the time was recorded incorrectly, the data was faulty, the hours had been logged against the wrong area, the wrong area had been associated with the job, and/or the hourly rate was incorrect. On the other hand, some jobs could be designated green but actually be red, because once the job was invoiced, no further data was compiled even if there was still work outstanding. It didn’t take long before people started to ignore the dashboard, because it was clear that the underlying logic was faulty.

LACK OF UNDERSTANDING

The demise of the traffic light dashboard was inevitable. The logic behind it was fundamentally flawed. Even if the data collection were faultless, the results would still have been misleading, because the basic assumption of a one-to-one relationship between a job in the invoicing system and a stage in the task management system was wrong. It was clear that there had been no consultation with the project managers when the time-tracking system was being developed, as they would have pointed this out before the idea got off the ground.

The failure to implement meaningful reporting showed a lack of understanding of how the business actually ran. Those involved in working on the reporting didn’t understand how the projects were managed and therefore failed to see the flaw in their logic. The wrong people were involved in creating the business intelligence. Failing to involve the right people up front resulted in a dashboard that provided a false view of project profitability.

The correct approach would have been to take a step back, be clear on what metrics were required, and make sure the right people were involved in working out how to gather and present those metrics. Not only would the results have been better, but having the right people involved — those most likely to be affected by the metrics — would mean the system would have their buy-in from day one.

If we boil down the complexity of BI to its most basic elements, we can sum it up as follows:

  • Data collection — making sure you record the information required
  • Analysis and reporting — taking the data, analyzing it, and producing meaningful reports
  • Action — making decisions and taking action based on what the reports show

For my small business, the collection, analysis, and reporting were simple. I was in charge, I knew the business backward and forward, and I knew what information I needed. The action wasn’t as easy. In the larger business, they struggled to capture the data and produced incorrect reports based on incomplete analysis. Action was the final stumbling block. After all the effort to get the time-tracking system in place, create the dashboard, and make it visible to client service and project managers, it failed to result in any real action. Perhaps this was a blessing in disguise in some ways, as any action taken on incorrect information would likely have caused more problems than it solved. What I found most disturbing, however, is that very little happened.

COMPOUNDING FAILURES

Before the larger company put any effort into BI, there was no way to tell which projects were profitable and which were costing the business money. After almost a year of working on setting up the traffic light dashboard, some information was available. It was clear (even with the faulty logic) that certain projects were extremely unprofitable. The result? Nothing of any significance. The projects continued to be unprofitable. There were no crisis meetings held, no reprimands, no remedial action — just the occasional review of a “red” project, only to find that the underlying data was suspect. There were two possible conclusions the business could draw from this. The first was that the project was in fact unprofitable and correctly identified as red. The second was that, due to an error in data collection, the project was marked red when it shouldn’t have been. Even in the first situation, a truly unprofitable project, the business wasn’t sure what to do with the information. Once again, inaction was the result.

Clearly, doing BI yourself can be fraught with danger. Even on a small scale it can be difficult if the right mindset is lacking, let alone in a large organization where tasks become so specialized that one person can’t know everything about the business. Scale makes things harder, but that’s not the main issue: clarity of purpose is. Knowing what information you need is vital. Once you have that, it’s merely an operational issue to work out how to get it. That’s not the hardest part, though. Assuming the data is collected accurately, the analysis is sound, and the reports are meaningful, the last and biggest hurdle is acting on the information at hand. That’s what really matters.

Of course, this is easier said than done. One way to ensure action is taken is to make the information as public as possible. When the information is only in the hands of senior management, it’s easier to ignore and may not result in action. If everyone in the business knows what is happening, chances are there will be more than a few people who know what needs to be done and will do it. Let the people who are most involved in creating the data help to influence the outcome. Information is power, and keeping information in the hands of a few lessens the ability for that power to be exercised.

BI can have a profound impact, if done well. This requires collection of the right data, meaningful analysis, and, most importantly, action. Without action, the first two elements become irrelevant. Ironically, it’s not this last step that proves to be the biggest obstacle to DIY business intelligence, but something that precedes all three: a desire for knowledge, for the truth of how the business is performing, even if that truth is difficult to face. If, like the Roman Inquisition, management doesn’t want to know the truth, the people will, and they will act if given the chance. There is much to be gained through BI by tapping into the collective intelligence in every business. Beware DIY business intelligence? Only if you try to go it alone.