Editor's Note: Miss part one of this piece? Check out the digital magazine version here, or the plain text version here.
    Testing and Implementation should follow a multi-phase approach
    • All technology components should undergo two levels of testing: Technical Feasibility, to validate the basic technical function of the hardware/software, typically performed in a lab environment; and Operational Feasibility, to verify that the technology will perform within your operation, typically performed in a field/line operations location (building, conveyor, vehicle, etc.) by a core team member, confirming its ability to function in the target environment.
    • The complete system testing should also take place in a minimum of three levels. Alpha Level Test (four to six weeks): validates the operation of hardware, software and new business processes, first in-house, then in a very limited live test in a field environment, typically by core team members.
    • Beta Level Test (four to six weeks): performed at a single field location, at first with core team members assisting, then scaled to the entire location and full target population, managed by field operations staff, and finally scaled to a second, larger, less “friendly” location with no intervention from the core team.
    • Pilot Level Test (six to eight weeks): scales system deployment to an entire, self-contained business unit (e.g., 15-20 sites). This allows you to validate business unit level reports and systems, validate the training plan (typically “Train-the-Trainer”) and confirm that the solution performs as expected/desired.
    8. Upon successful testing, full system deployment should follow a controlled process
    • A “Big Bang” approach (all locations, all users on a single day) should be avoided unless there is no alternative (some notable exceptions include payroll changes, etc.)
    • Ideally full deployment will follow the organizational structure of the company
    • Most field operations are organized into business units. A best practice calls for a “Shot Gun Start” where all business units begin on the same day, deploying to one or two locations in each business unit. Each business unit will follow a defined schedule that ensures completion by a given end date. 
    • Of course, monitoring the ongoing status of the deployment is critical to finding any unidentified problems. The rigorous testing and deployment methodology suggested here makes such problems very unlikely, but the project team must remain vigilant for any unanticipated condition that could jeopardize the deployment. Reporting and monitoring of Success and Accuracy metrics will be critical.
    • Key to the “Shot Gun Start” methodology is a decentralized project management structure. Each business unit should have a dedicated PM, ideally a field operator on special assignment, to manage the deployment for that business unit. This PM will ideally have served on the implementation team for that business unit's Pilot Test. Having an operator manage the deployment for a business unit will ensure ownership by the ops team.
    • The final step in successful deployment is the transition to long term support. New equipment must be budgeted and deployed for growth and replacement; new users must be trained with the same level of rigor and competence as the initial deployment. Support groups that assume core team responsibilities must be identified and trained, with a firm cut-over date. A post launch assessment should be performed, and any lessons learned documented and communicated.
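The “Shot Gun Start” schedule described above, where every business unit begins on the same day and deploys a fixed number of sites per week until all locations are converted, can be sketched as a simple scheduling routine. This is an illustrative model only; the business unit names, site lists and start date are hypothetical.

```python
from datetime import date, timedelta

def shotgun_schedule(business_units, start_date, sites_per_week=2):
    """Every business unit begins on the same day and deploys a fixed
    number of sites per week until all of its locations are converted."""
    plan = []
    for unit, sites in business_units.items():
        for week, i in enumerate(range(0, len(sites), sites_per_week)):
            plan.append({
                "unit": unit,
                "week_of": start_date + timedelta(weeks=week),
                "sites": sites[i:i + sites_per_week],
            })
    return plan

# Hypothetical business units and their site lists
units = {
    "Northeast": ["NE-1", "NE-2", "NE-3", "NE-4", "NE-5", "NE-6"],
    "Southwest": ["SW-1", "SW-2", "SW-3", "SW-4"],
}
plan = shotgun_schedule(units, date(2024, 1, 8))
```

Because each unit's schedule is derived from its own site count, a defined end date falls out naturally: the largest unit's site list divided by the weekly rate gives the completion week for the whole deployment.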
    What are Success and Accuracy Metrics?
    • Most teams will (or at least should) identify key metrics to validate that the solution is performing as intended, which might include validating that the data collected is accurate, complete, timely, etc. They might also include some rudimentary assessment of scalability. Most of these metrics are “Point of Consumption” type metrics, measuring and assessing at the immediate moment of collection, transmission, action, etc. They clearly consider accuracy, integrity, precision, cycle time, timeliness, etc. But they are typically myopic in their perspective.
    • Success and Accuracy metrics take a broader, “system” view. They take a comprehensive view of each system impacted by any software/hardware deployment and are typically rigorous in the types of measures (time, accuracy, integrity) applied to an entire system. They also look at all upstream and downstream processes. Many systems are inter-related and difficult to precisely define and measure using a single metric; a broad group of key operating metrics, financial metrics (revenue, profit, cost, etc.) and performance metrics provides the best assessment of any unintended consequences or impacts.
    • Most companies use a service or quality index composed of a series of operating metrics, and a group of financial metrics trended each day, week, month, etc. These comprehensive metrics flow through the core systems of a company and typically touch every critical system and process of the company. As such they provide the perfect S&A metric base. 
    • These metrics should be trended for a period of time prior to deployment and a period of time post deployment, watching for any deviations, to ensure there are no unintended consequences from the project.
    Whew! I finally completed the deployment, transitioned all responsibilities to support teams and completed the post launch assessment, now I’m DONE, right?
    • Not quite! If you own the “technology” piece, you now have to plan for refresh! We call this Technology Lifecycle Planning. Every technology solution/platform will face replacement/retirement. 
    • Implicit in the decision to implement a major automation initiative is the acceptance of the inevitable upgrade cycle, which has significant impacts on capital expenditure forecasting, costing, etc. 
    • What is involved in Technology Lifecycle Planning? First and foremost, it requires a clear understanding and honest dialog with your supplier: how long will they manufacture and support your selected device? This raises questions like when you must do a “final buy” of a device platform, and what premium you will pay to support your device.
    • A best practice in this space calls for a planned useful lifecycle of five to seven years. This is important for your financial planning and accounting team, since you may be depreciating devices. It allows you to maximize training and familiarity with a single technology platform, while giving you the flexibility to accelerate replacement in the event of a critical technological innovation that would benefit your operation or enhance your customer experience, or to extend replacement in the case of an economic downturn that dictates a conservative fiscal plan.
    • This also means that your technology development teams are now on a somewhat continuous, five-year cycle. The end of deployment signals the post mortem, post launch assessment and lessons learned. Within the next 12 months, the “requirements planning” process begins again for the next generation of product. Within 18 months, the supplier strategy mapping to requirements begins in earnest, with an eye to device evaluation at 24 months.
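Because devices on a five-to-seven-year lifecycle are typically depreciated, it helps to see how book value runs down over the planned life. The sketch below assumes simple straight-line depreciation; the device cost, residual value and five-year life are hypothetical figures.

```python
def straight_line_book_values(cost, salvage, life_years):
    """Year-end book values under straight-line depreciation:
    an equal share of (cost - salvage) is expensed each year."""
    annual = (cost - salvage) / life_years
    return [round(cost - annual * year, 2) for year in range(1, life_years + 1)]

# Hypothetical: a $1,200 handheld device with a $200 residual value
# and a planned five-year useful life
values = straight_line_book_values(1200, 200, 5)
```

Extending the lifecycle to seven years simply lowers the annual charge, which is one reason the five-to-seven-year window gives the fiscal flexibility described above.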
    How does this impact my business model, or more accurately my financial cost model?
    • Technology projects typically come with a significant capital expenditure. Of course they are cost justified as measured by any number of metrics like NPV, MIRR, break even, etc.
    • Most progressive companies are in tune with Total Cost of Ownership (TCO). Unfortunately, TCO typically looks at initial investment and support costs, not refresh/replacement. 
    • The problem arises five years later, when you need to convince your CFO and BOD to reinvest the same (or potentially more) capital to refresh the solution. No CFO alive will let you “reuse” savings from a solution implemented five years ago. Similarly, no CFO, CEO or BOD will tell you to scrap the technology and go back to manual methods.
    • The problem is educating the CFO and financial planning team regarding the nature of the investment. Your financial justification probably involves things like labor savings and improved data availability, which leads to an improved customer experience, which in turn leads to increased/protected revenue, higher margin, etc. This of course offsets an existing cost that is currently modeled in your business cost model. The capital/expense/depreciation is incorrectly interpreted as a one-time cost; it should instead be framed as displacing labor cost, increasing productivity or creating a new revenue source.
    • Educating the CFO about the recurring nature of the technology investment will avoid a very unpleasant discussion in five years. Capital & expense forecasting and planning will accurately reflect the new, true costs of the operation. The cost model will be modified to recognize both the new cost and the new benefits. There will be no surprise for the CFO, CEO or BOD when it comes time to refresh the technology. 
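The NPV point above can be made concrete with a small sketch. The figures here are hypothetical (in $ thousands): a one-time investment looks attractive, but once the refresh is modeled as a recurring cash flow, the same project tells a very different story, which is exactly why the CFO needs to see the true lifecycle cost up front.

```python
def npv(rate, cash_flows):
    """Net present value, where cash_flows[0] is the initial
    (usually negative) investment at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical: $500k investment, $150k/year labor savings, 10% discount rate
one_time = npv(0.10, [-500, 150, 150, 150, 150, 150])

# Same project with the year-5 refresh modeled as a recurring cost
with_refresh = npv(0.10, [-500, 150, 150, 150, 150, 150 - 500])
```

Under these assumed numbers the one-time view is comfortably positive, while including the refresh turns the five-year NPV negative, meaning the benefits case must carry the recurring investment, not just the first one.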
    Adopting these principles will help ensure your company has the best chance of successfully implementing your new technology initiatives. It isn’t hard to do, but it will require discipline and commitment. It is, however, well worth the effort! Good luck!



    Roman Hlutkowsky is the Founder and Principal at The Hlutkowsky Group, a consultancy that specializes in Business Process, Technology & Automation, and Enterprise Transformation. He has more than 27 years of experience in the transportation industry, mostly focused on applying technology to improve operations. Roman and his team take great pride in being on the cutting edge of emerging technologies and integrating them in ways that are beneficial to their customers.