Office Hours

Hidden gems in AI deployments

Things that are critical to your success in deploying AI

Marc Nehme
Towards Data Science
6 min read · Dec 9, 2020


Photo by Michael Dziedzic on Unsplash

Since 2014, I’ve been leading and advising teams to design and deliver AI-powered solutions from concept to production. While the technology has evolved significantly, the recipe for delivery has largely remained the same. I want to share some best practices and critical aspects that are often underestimated or overlooked. These are the “hidden gems” that I have found to be critical to the success of a production deployment.

ROI & Success Criteria

The first thing both you and your customer need to understand and have complete clarity on is the purpose of the project. Yes, you need to define the use case, scenarios, personas, and more. But you need to answer several questions, such as: What are the actual outcomes the customer is looking to achieve? How will this help improve the customer’s business?

I have seen many AI deployments where the outcome was a technical success but a business failure, and those projects were not successful. The technical aspects of the solution worked correctly, but there was no quantifiable business value. The ROI could not be proven: for example, no additional revenue was generated, or there were little to no cost savings.

That is why it is critical to understand and define success criteria for any project. These should be created and agreed to by both parties before you start. This helps ensure that the outcome is quantifiable and that you can definitively state whether the project was a success. Here are some examples of success criteria statements:

  • “Meet an accuracy level of 70% or above for all functional tests, using an agreed upon test rating system”
  • “Increase an employee’s efficiency rate by 30%”
  • “Improve the system processing time by at least 20%”
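Criteria like these are most useful when they can be checked automatically at the end of each test cycle. Here is a minimal sketch; the rating data and the 70% threshold are illustrative, not from any real project:

```python
# Hypothetical success-criteria check: did the solution meet the
# agreed-upon accuracy level across all functional tests?

def accuracy(results):
    """Fraction of functional tests rated as passing (1 = pass, 0 = fail)."""
    return sum(results) / len(results)

def meets_criterion(results, threshold=0.70):
    """True when measured accuracy meets or exceeds the agreed threshold."""
    return accuracy(results) >= threshold

# Example: 15 of 20 functional tests passed (75%), so the 70% bar is met.
results = [1] * 15 + [0] * 5
print(meets_criterion(results))  # True
```

Encoding the criterion this way removes ambiguity at sign-off: either the agreed number was hit, or it was not.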

Scope

This is another item that needs to be clearly defined and documented. Every project needs a scope — that is, what is supposed to be implemented during the actual delivery of the project. This captures what is “in bounds” and also should outline some related or common “out of scope” items. Here are some examples:

  • “Will integrate with one PostgreSQL db to extract all financial data”
  • “Integration with any other data source, other than PostgreSQL, is out of scope”
  • “Will not exceed modeling more than 200 unique answers”

Documenting your scope will help you avoid chasing a moving target. It allows you to state that you have implemented everything promised in the original scope of work. It will also help you identify and avoid scope creep, where new items are added to the project during delivery.

Understanding the Requirements

This is another very critical part of the process. You must listen to your customer, listen to their challenges, listen to where they want to be. Not all customers are knowledgeable in the AI space. Their experience can range from having some hands-on experience to knowing nothing at all. That is why you are in the picture — to help guide them along this journey!

Document your requirements. Mutually agree to those requirements. Show how those requirements map to the solution's use case, scope, and success criteria. You can partition the requirements into two main types: functional and non-functional. Below are some examples.

Functional requirements:

  • “Must support all products mentioned in the Form 101 document”
  • “Must provide a confidence level with each answer”
  • “Must provide a feature to allow the end user to speak to a live agent”

Non-functional requirements:

  • “Must support 1000 concurrent users”
  • “Must support mobile browsers (Chrome, Safari) and desktop browsers (Firefox and Chrome)”
  • “Must provide a response to users within 5 seconds or less”
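Non-functional requirements like the response-time target above can also be verified with a simple automated check. A minimal sketch, where `handler` is a hypothetical stand-in for the real system under test:

```python
# Illustrative check for a "respond within 5 seconds" requirement.
import time

def timed_response(query, handler, max_seconds=5.0):
    """Call the handler and report whether it met the latency target."""
    start = time.perf_counter()
    answer = handler(query)
    elapsed = time.perf_counter() - start
    return answer, elapsed, elapsed <= max_seconds

def handler(query):
    # Placeholder for the real AI system; a deployment would call
    # the actual service endpoint here.
    return f"answer to {query!r}"

answer, elapsed, within_target = timed_response("What products are covered?", handler)
print(within_target)  # True
```

Running such checks on every test cycle turns a vague expectation ("it should feel fast") into a pass/fail result both parties can agree on.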

Without clearly defined requirements, you risk misalignment with your customer. You risk a lack of clarity, both internally and externally. You need to understand the requirements to generate the epics, stories, and tasks used to assign work to your teams and to track progress.

ROI, success criteria, scope and requirements can be discussed and vetted via design thinking sessions (or other focused sessions). But they do not end there. They need to be formalized into mutually agreed upon documentation.

Photo by Gabrielle Henderson on Unsplash

Data

In order to properly design the solution for any AI-powered deployment, you need to see and understand the data you will be working with. Always ask for samples of the actual data that will be used, as early as you can. Assess the data. There are many questions you need to answer, such as:

  • Does your customer have access/privileges to all the data you need?
  • Are there any security hurdles or compliances, on either end, that need to be worked through?
  • Is the data ready “as-is” to be used for your project? If it is not, then what needs to be done to get the data into a consumable format?
  • Do you have “enough” data? That is a very subjective concept, and there is no one-size-fits-all answer. You need to assess the data to make sure you have a proper distribution.

I’ve seen many projects that were delayed, failed, or never took off due to data issues. This is definitely a showstopper, so make sure you have assessed the data!
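An early assessment pass over a data sample can be very lightweight. Here is one possible sketch, using only the standard library; the field names and the rarity threshold are illustrative assumptions:

```python
# Quick data-assessment sketch: count missing labels and flag classes
# that appear too rarely to learn from reliably.
from collections import Counter

def assess(samples, label_field="label", min_share=0.05):
    labels = [s.get(label_field) for s in samples]
    missing = labels.count(None)  # records with no label at all
    counts = Counter(l for l in labels if l is not None)
    total = sum(counts.values())
    # Classes making up less than min_share of labeled data are flagged.
    rare = [l for l, c in counts.items() if c / total < min_share]
    return {"missing": missing, "counts": dict(counts), "rare_classes": rare}

# Hypothetical sample: three invoices, one receipt, one unlabeled record.
samples = [
    {"label": "invoice"}, {"label": "invoice"}, {"label": "invoice"},
    {"label": "receipt"}, {},
]
report = assess(samples)
print(report)
```

Even a report this simple surfaces the questions above concretely: how much data is unusable as-is, and whether the class distribution supports the use case.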

The Solution

A common mistake when starting a project is automatically assuming the technology to be used. Do not do this. Do not go into a project with any firm assumptions about which technology you want to use. First, you need to completely understand what the customer is looking to achieve (all the things I have discussed above). Then, you can identify the appropriate technology in your architecture to meet the project’s needs.

The technology you propose should be used for its intended purpose. Most products are designed with specific use cases and certain data in mind. This can also affect what type of product support you receive. Make sure to understand a product’s limitations before incorporating it into the overall architecture. A deciding factor: does the technology produce results that directly map to what the customer has asked for?

Playbacks

Put something in front of your customer early and often, even if it’s just wireframes or very early development work. This can save a lot of time and can be the difference between a failed and a successful project. The sooner you put something in front of your customer, the better: it confirms your understanding of what you need to build and whether you’re on the right path. Playbacks can be held at whatever interval you choose; two weeks is a typical interval most of my teams have used successfully.

Testing

You want to test your solution as early as possible. You also want to ensure that the people testing your solution are the real end users (or the closest thing to them). For example, say your target end-user persona is new employees only. Do not have SMEs or seasoned employees be the primary testers of your solution; they are not representative and will not provide you with contextual feedback. While it is important for those experts to test and give their feedback, they will not be the ones using the solution. At the end of the day, it is the new-employee persona that will be using it, and their opinion should carry more weight.

The number of test cycles you go through will depend on factors such as project timeline, quality of results, and scope. In my teams’ experience, an average of three full test cycles (no fewer) is both effective and realistic, all things considered. The more you test, the more you can enhance the solution, though you will eventually hit a point of diminishing returns.

The Bottom Line

There are many different aspects to making an AI project a reality and a success. I did not discuss every single one above, but I have shared my experience and lessons learned for some of the most critical activities. These are the areas where I have most often seen teams struggle and succeed. Ultimately, how you approach and complete these activities will dictate the level of success of your projects.

Photo credit — Unsplash

Marc is the CTO for the IBM Watson AI Strategic Partnerships organization. When not leading the technology strategy and vision for IBM Partnerships, Marc enjoys DJing, playing video games and wrestling.



Tech guy living the dream, AI enthusiast, helping scale AI across the globe, making things real - MarcNehme.com