The 5 Machine Learning (ML) Essentials That Tech & Organizational Leaders Need To Know
* Originally published on TechCrunch
We’re living in a phenomenal moment for machine learning (ML), what Sonali Sambhus, Head of Developer and ML Platform @ Square, describes as “the democratization of ML.” Thanks to the incredible pace of change and development in this space, ML has become a foundation of business and growth acceleration.
But for engineering and team leaders without an ML background, this can also feel overwhelming and intimidating. I regularly meet smart, successful, highly competent and normally very confident leaders who struggle to navigate a constructive or effective conversation on ML – even though some of them lead teams that engineer it.
This article is an effort to help bridge that gap and demystify ML for seasoned organizational leaders without an ML background.
I’ve spent over two decades in the ML space. This includes working at Apple to build the world’s largest online App and Music store. As the Senior Director of Engineering, Anti-Evil at Reddit, I used ML to understand and combat the dark side of the web.
For this piece, I also interviewed a select group of successful ML leaders for their insights: Sambhus; Lior Gavish, co-founder @ Monte Carlo; and Yotam Hadass, VP of Engineering @ Electric.ai. I’ve distilled our best practices and must-know components into this set of five practical, easily applicable lessons.
1. The ML Recruiting Strategy:
Recruiting for ML comes with several challenges.
The first is that it can be difficult to differentiate machine learning roles from more traditional job profiles (such as data analyst, data engineer and data scientist) because there’s heavy overlap between the descriptions.
Secondly, finding the required level of experience can be challenging. Few people in the industry have substantial experience delivering production-grade ML (for instance, you’ll sometimes see resumes that claim experience with ML models, only to find the models in question are rule-based engines rather than real ML).
When it comes to recruiting for ML, hire experts when you can, but also look into how training can help you meet your talent needs. Consider upskilling your current team of software engineers into Data/ML engineers or hire promising candidates and provide them with an ML education.
Another effective way to overcome these recruiting challenges is to define roles largely around:
- Product: Look for candidates with technical curiosity and a strong business/product sense. This mindset is often more important than the ability to apply the most sophisticated models.
- Data: Look for candidates who can help select models, design features, handle data modeling/vectorization and analyze results.
- Platform/Infra: Look for people who can evaluate, integrate and build the platforms that significantly accelerate the productivity of data and engineering teams: ETLs, warehouse infrastructure and CI/CD frameworks for ML.
Again, consider the power of training: an engineer with the right curiosity and interest can, with the right skills training, become the ML expert you need.
Regularly engaging with industry advisors and academics is another way to provide the team with updates on the latest and greatest approaches to ML. Quality bootcamps can be a great way to upskill your teams.
2. Organizational Structure:
How best to structure the ML team within the larger organization (its size, and whether it sits vertically or horizontally) is a significant decision that impacts the efficiency and predictability of the business, and it should be guided by the stage and size of the company.
Early Stage: (< 25 members) At this size, a shared central team is the safest and quickest way to develop infrastructure and organizational readiness. In the early stage, your ML team should constitute 10-20% of the entire engineering team.
Mid Stage: (25-500 members) By mid-stage, it’s best to focus on vertically integrated teams. Lior Gavish, co-founder @ Monte Carlo, is a huge fan of vertical ML teams, “… because they have a huge advantage in terms of gaining deep understanding of the problem being solved.”
Vertical integration also allows for sustained focus and prioritization, which is needed since mid-stage ML projects tend to be longer and more uncertain.
Mature: (500+ members) At this stage, the business should create a separate ML platform/infra team. For example, Square is a 2,500+ person engineering org with 100+ data scientists/ML engineers and 15+ ML platform/infra engineers. The ML teams are aligned with individual business units, such as chatbots and risk/fraud detection, rather than with a specific technology, and an ML platform/infra team is shared across the other teams in the company.
Exception: The size of the team varies depending on how key ML is to the product and services being developed.
3. ML Pipeline:
Deploying and maintaining ML pipelines is not dramatically different from deploying and maintaining general software. ML knowledge is required around building, tuning, testing, verifying and versioning the model – as well as monitoring it.
The key steps to successfully building, deploying and maintaining an ML pipeline are listed below, followed by a minimal sketch in code:
- Define a product problem and determine a fit for ML
- Refine datasets
- Know how to isolate data issues vs. model drawbacks
- Test, debug and version your models
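To make these steps concrete, here is a minimal sketch using scikit-learn and pandas (two of the libraries mentioned below). The dataset, file names, column names and version tag are hypothetical placeholders, not a prescription for your stack.

```python
# Minimal pipeline sketch: refine a dataset, train a model, compare it against a non-ML
# baseline, and version the resulting artifact. The CSV file, column names and version
# tag are hypothetical placeholders; features are assumed to be numeric.
import joblib
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 1. Refine the dataset (real-world cleaning is usually far more involved).
df = pd.read_csv("labeled_examples.csv").dropna()
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 2. A non-ML baseline helps isolate data issues from model drawbacks.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# 3. Train and test the candidate model.
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
print("model accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 4. Version the trained artifact so it can be compared with, or rolled back to, later.
joblib.dump(model, "model_v1.joblib")
```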
Using off-the-shelf software can be an incredibly effective way to reduce the cost of, and dependency on, highly skilled and specialized ML engineers, but be careful not to unintentionally create a disorganized spaghetti solution suite that is difficult to maintain.
While the industry is nascent, tools like Databricks, AWS SageMaker, Tecton, Cortex and others will save time and resources. As for platforms and libraries, there are many competing solutions in the market: TensorFlow, PyTorch, Keras, scikit-learn, Pandas, NLTK, etc.
4. Metrics and Evaluation:
The key challenge around ML is reliability. How can you be sure your model is performing adequately before it’s deployed? How do you monitor production performance and troubleshoot issues? The solution is pretty similar to software engineering: observability.
It’s critical to instrument the environment, monitor and track application performance. Yotam Hadass, VP of Engineering @ Electric.ai, recommends the book Building Machine Learning Powered Applications by Emmanuel Ameisen to understand how to do so.
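As a small illustration of what that instrumentation can look like, the sketch below wraps model inference with basic telemetry (latency and the prediction itself) that a monitoring system can aggregate. The model object, logger name and logged fields are assumptions made for illustration.

```python
# Observability sketch: wrap inference with telemetry so dashboards and alerts can track
# latency and prediction distribution over time. The model and features are hypothetical.
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ml.inference")

def predict_with_telemetry(model, features):
    """Run one prediction and log structured fields for monitoring."""
    start = time.perf_counter()
    prediction = model.predict([features])[0]
    latency_ms = (time.perf_counter() - start) * 1000
    logger.info("prediction=%s latency_ms=%.2f n_features=%d",
                prediction, latency_ms, len(features))
    return prediction
```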
A model that performs better than a baseline (one with no ML) and is both stable and secure should be good enough to take to production. As a framework, I would advocate iteration over perfection.
Rolling out models under a feature flag is safe and ensures that you can turn them off quickly before disaster hits. The ability to run multiple versions of the model in production via A/B testing will drastically increase confidence in the new model and raise the overall level of reliability.
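Here is a hypothetical sketch of such a guarded rollout: a kill-switch flag plus deterministic bucketing that routes a small percentage of traffic to the candidate model. The flag values, traffic split and rule-based fallback are placeholders; in practice the flag would come from your feature-flag service.

```python
# Sketch of a feature-flagged rollout with a simple A/B split between model versions.
# Flag values, traffic split and the rule-based fallback are hypothetical placeholders.
import hashlib

ML_ENABLED = True           # kill switch: flip off quickly if the model misbehaves
CANDIDATE_TRAFFIC_PCT = 10  # route 10% of users to the candidate model

def choose_model(user_id, current_model, candidate_model, rule_based_fallback):
    if not ML_ENABLED:
        return rule_based_fallback
    # Deterministic bucketing: the same user always lands in the same variant.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return candidate_model if bucket < CANDIDATE_TRAFFIC_PCT else current_model
```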
A good dataset is a must: one that is meticulously created and reflects production scenarios. Build a system that allows you to backtest against historical datasets and compare the results with predictions made by previous versions of the model.
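A minimal sketch of that kind of backtest, assuming two versioned model artifacts and a curated historical dataset (all file names and the accuracy metric here are placeholders):

```python
# Backtest sketch: score the previous and the new model version on the same historical data.
# File names and the accuracy metric are hypothetical placeholders.
import joblib
import pandas as pd
from sklearn.metrics import accuracy_score

history = pd.read_csv("historical_examples.csv")  # curated, production-like examples
X_hist, y_hist = history.drop(columns=["label"]), history["label"]

for version in ("model_v1.joblib", "model_v2.joblib"):
    model = joblib.load(version)
    print(version, "accuracy on history:", accuracy_score(y_hist, model.predict(X_hist)))
```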
You need metrics and evaluation to separate good models from bad and to address concerns such as:
- Usefulness to end-user
- Data security
- Stability of the model
- Practicality of the predictions and recommendations
- Ability to explain why a model made the recommendation it did (see the sketch after this list)
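For that last point, here is a minimal sketch of one common approach, permutation feature importance from scikit-learn, which surfaces which inputs most influence the model’s recommendations. It reuses the hypothetical model and test split from the pipeline sketch above.

```python
# Explainability sketch: permutation importance shows which features drive the model's output.
# Reuses the hypothetical model, X_test and y_test from the earlier pipeline sketch.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
ranked = sorted(zip(X_test.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for feature, importance in ranked:
    print(f"{feature}: {importance:.3f}")
```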
5. Common Pitfalls:
On first read, some of these pitfalls may seem like common sense, but they are worth both reiterating and reflecting on, since they can help guide your team to the best decision during a critical moment.
Don’t:
- Apply ML to problems that aren’t a good fit for it, such as problems solvable with a straightforward sequence of steps, problems without enough data, or problems where the data can’t predict the observable outcome
- Expect instant results: impactful ML takes patience and iteration to get solid results
- Focus on model success metrics – without enough attention to product success metrics
- Underestimate the tooling and infrastructure costs, which leads to slow engineering progress
Within the last decade, ML has established itself as a technology accelerator. It’s critical in driving automation, bottom-line profitability and growth. Leaders therefore need to know and embrace ML, and keep up with the lightning-speed advances in ML technology.
Integrating ML teams effectively into the business starts with an understanding of what makes the right candidate and how to structure the team for maximum velocity and focus.
Leaders should focus on guiding the team to build end-to-end models with integrated observability and monitoring before the models hit production. Evaluate models based on product success, not model success. Avoid the common pitfalls of high-stress situations by being intentional about monitoring for them, and proactively engage industry experts and academics to help keep the team up to date on the latest developments.