September 30, 2023

AI Product Management - Fundamentals

As the AI product landscape flourishes, businesses and careers are seeing unprecedented growth. Dive into this guide on AI technology and product management to position yourself at the forefront of this dynamic sector.

Driven by the astronomical surge in computational power over the last decade, AI has firmly entrenched itself across sectors – from healthcare and finance to retail and travel. But is it all just hype, or does it truly offer tangible value to businesses?

Let’s embark on an exploration. For emerging AI product managers, this comprehensive guide delves deep into AI’s nuances, demystifying its types, applications, challenges, and the road to mastery in managing AI-driven products.

AI & Machine Learning: Building Blocks of the Future

Artificial intelligence encompasses the broader concept of machines simulating human intelligence to execute tasks. Common interactions with AI, often unnoticed, include voice assistants like Siri and Alexa or customer service chatbots on websites.

Most commercial AI applications rest on the foundational pillar of machine learning (ML). But what is ML?

Machine learning is a subset of artificial intelligence. It gives AI systems the ability to "learn" from patterns in data autonomously, without explicit human intervention. The growing volume and complexity of data, far beyond what humans can process manually, has both expanded what machine learning can do and made it increasingly necessary.

Let’s consider Netflix as an example. How does it so accurately recommend shows? It employs ML models that group users based on their behavior, learn their viewing habits and preferences, and then offer content tailored to each viewer.

Let's look at some of the standout applications of Machine Learning:

  • Ranking and Recommendation:

Example: YouTube suggests videos based on your viewing history, enhancing user experience. Shopping websites can highlight products that you might like.

  • Classification:

Example: Email systems like Gmail classify messages as spam or not based on numerous factors.

  • Regression:

Example: Real estate platforms might predict property prices based on historical data, square meters, neighborhood, number of bedrooms, and other influencing factors.

  • Clustering:

Example: Marketing platforms cluster audiences based on browsing habits; for instance, they can automatically infer that a user is expecting a baby purely from behavioral patterns.

  • Anomaly Detection:

Example: Fraud detection systems in banks flag unusual transactions, enhancing financial security. Interestingly, we often don't know in advance what counts as an anomaly, so the system must be robust enough to surface suspicious activity on its own.

  • Content Creation:

Example: Generative Adversarial Networks (GANs) creating art. Introduced in 2014, GANs can generate unique artworks that often rival human-made pieces. More recently, generative AI models such as ChatGPT and Midjourney have become widely popular.

  • Image/video recognition:

Example: Face ID on your iPhone unlocks the device by recognizing your face, even when you are wearing a mask. Similar technology lets you copy text directly from photos.

To power these AI models, vast amounts of data are essential. As these algorithms ingest data, they detect patterns or functions. These functions form the backbone of predictive models that inform and guide user experiences.
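
To make the pattern-learning idea concrete, here is a minimal sketch of the spam-classification example above using scikit-learn; the emails and labels are made-up toy data, not a real dataset.

```python
# Minimal, self-contained spam-classification sketch (toy data, for illustration only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical dataset: 1 = spam, 0 = not spam.
emails = [
    "Win a free prize now",
    "Meeting rescheduled to 3pm",
    "Claim your free lottery reward",
    "Quarterly report attached",
]
labels = [1, 0, 1, 0]

# The model "learns" a pattern (word usage) that separates the two classes.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["free prize waiting for you"]))  # most likely [1], i.e. spam
```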

Unraveling the Types of Machine Learning: A Deep Dive

Machine Learning isn’t just a monolith; it branches into various types based on the nature of data and the end objectives. Here's a closer look:

Types of Machine Learning (overview table)
Do we really need AI/ML?

The allure of AI has captivated the attention of businesses and individuals alike. It's become a trend for companies to inject "AI" into their branding or product descriptors, while CEOs frequently drop "AI" and "ML" buzzwords during investor meetings in hopes of attracting more interest. While there's no harm in this approach as a marketing strategy, misrepresenting a product's capabilities can lead to unfortunate repercussions, as seen when organizations face regulatory scrutiny from bodies like the FTC.

In product development, it's imperative for Product Managers (PMs) to adopt a strategic stance. Before jumping on the AI bandwagon, a PM should critically assess whether integrating AI truly aligns with the product's goals and offers genuine value. Blindly incorporating AI without a strategic foundation can not only diminish the product's worth but also negatively impact user experiences.

Any PM venturing into the realm of AI and ML must conduct a thorough evaluation of the project's Return on Investment (ROI). Given that the ML product lifecycle is more volatile compared to traditional software development, the associated risks are elevated. Consequently, AI projects should be approached with a heightened level of caution and a higher discount rate. While determining the cost-benefit analysis and estimating ROI in AI projects can be challenging due to the intertwined components, insights from industry cases and competitor outcomes can be invaluable. PMs should adopt a structured approach: formulate hypotheses, set assumptions, construct a model, and clearly define success metrics and objectives.
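
As a purely illustrative sketch of such a model, the snippet below computes an expected net present value for an ML initiative; every figure, including the elevated discount rate, is a hypothetical assumption rather than a benchmark.

```python
# Hypothetical back-of-envelope ROI model for an ML initiative (all numbers are assumptions).
build_cost = 300_000        # assumed one-off cost: team, data work, infrastructure
annual_run_cost = 80_000    # assumed yearly cost: serving, monitoring, retraining
annual_benefit = 250_000    # assumed yearly uplift if the success metric is reached
success_probability = 0.6   # assumed chance the model hits its target metric
discount_rate = 0.25        # deliberately high to reflect the elevated risk of ML projects
years = 3

npv = -build_cost
for year in range(1, years + 1):
    expected_cash_flow = success_probability * annual_benefit - annual_run_cost
    npv += expected_cash_flow / (1 + discount_rate) ** year

print(f"Expected NPV over {years} years: {npv:,.0f}")
```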

Let's look at Google PM James Smith's framework, which helps PMs determine whether ML is applicable to their projects:

Decision tree for deciding whether to build an ML product

PMs should be guided more by the tangible business impacts of an ML initiative rather than its novelty or complexity. The primary objective should always be to drive positive business results. Through various interactions, such as coffee chats and conference discussions, it's evident that many ML Product Managers have sometimes achieved greater business success by opting against ML integration. It's crucial to remember that incorporating ML invariably introduces complexity, and with complexity comes increased cost.

What surrounds AI (backend)

When we think of machine learning (ML), the immediate image is often of intricate algorithms and layers of neural networks - the heart of the ML model. However, to ensure that the model efficiently integrates into systems, offers value, and continuously learns, there's a substantial amount of code required beyond just the model's creation. Let's delve into the essential facets of code that play a pivotal role alongside the core ML model.

MLOps structure (from MLOps.org)

  • Data Preprocessing:

Purpose: Raw data is often messy. It needs to be cleaned, standardized, and transformed to be suitable for training.

Typical Code Tasks: Handling missing data, normalization, encoding categorical variables, and sometimes augmenting data to ensure the model has enough variance for training.

Benefits: Ensuring that the data fed into the model is of high quality can greatly increase the model's accuracy and performance.
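
As a concrete illustration, here is a minimal preprocessing sketch on a small hypothetical dataset with a missing numeric value and a categorical column:

```python
# Illustrative preprocessing sketch on a hypothetical dataset.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age": [34, None, 29, 45],            # contains a missing value to handle
    "country": ["US", "DE", "US", "FR"],  # categorical variable to encode
    "income": [52_000, 61_000, 48_000, 75_000],
})

df["age"] = df["age"].fillna(df["age"].median())   # handle missing data
df = pd.get_dummies(df, columns=["country"])       # one-hot encode the categorical column
df[["age", "income"]] = StandardScaler().fit_transform(df[["age", "income"]])  # normalize

print(df)
```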

  • Feature Engineering:

Purpose: Enhancing the input data by creating new variables from the existing ones to improve model performance.

Typical Code Tasks: Generating polynomial features, creating interaction terms, or using domain-specific knowledge to create new variables.

Benefits: Well-engineered features can dramatically improve model performance and reduce the need for more complex models.
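
For instance, here is a short sketch of generating polynomial and interaction features with scikit-learn; the input columns (square meters and bedrooms) are hypothetical:

```python
# Illustrative feature-engineering sketch: polynomial and interaction terms.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical features: [square_meters, bedrooms]
X = np.array([[50, 1], [80, 2], [120, 3]])

poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)  # adds squared terms and the square_meters*bedrooms interaction

print(poly.get_feature_names_out(["square_meters", "bedrooms"]))
print(X_poly)
```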

  • Pipeline Creation:

Purpose: Streamline the process from data preprocessing to model evaluation, making the workflow more reproducible and efficient.

Typical Code Tasks: Chain preprocessing steps and model training into a single, cohesive process using tools like Scikit-learn's Pipeline.

Benefits: Increases code maintainability and ensures consistent data treatment at every run.
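
Here is a minimal sketch of such a pipeline with scikit-learn's Pipeline, chaining an imputer, a scaler, and a model; the training data is hypothetical:

```python
# Illustrative scikit-learn pipeline: preprocessing and training chained into one object.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # fill missing values
    ("scale", StandardScaler()),                   # normalize features
    ("model", LogisticRegression()),               # final estimator
])

# Hypothetical training data: [age, income], with one missing income.
X_train = np.array([[25, 50_000], [40, np.nan], [31, 72_000], [52, 61_000]])
y_train = [0, 1, 0, 1]

pipeline.fit(X_train, y_train)           # every step runs in order, identically on every run
print(pipeline.predict([[33, 58_000]]))
```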

  • Model Evaluation and Tuning:

Purpose: Ensure that the model is performing optimally and is neither overfitting nor underfitting. Moreover, the model needs correction and retraining as its performance degrades over time.

Typical Code Tasks: Cross-validation, hyperparameter tuning using grid or random search, and evaluation metrics computation.

Benefits: Ensures the deployment of the most optimal model, leading to more reliable predictions in real-world scenarios.
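
A short sketch of cross-validated hyperparameter tuning with scikit-learn's GridSearchCV; the parameter grid is an arbitrary example:

```python
# Illustrative hyperparameter tuning with cross-validated grid search.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}  # example grid
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,                 # 5-fold cross-validation
    scoring="accuracy",
)
search.fit(X, y)

print(search.best_params_, round(search.best_score_, 3))
```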

  • Model Deployment:

Purpose: Integrate the trained model into a production environment for real-world use.

Typical Code Tasks: Convert the model into a format suitable for production, expose it as an API endpoint, or integrate it within mobile or web apps.

Benefits: Realizes the actual value of ML models by making predictions available to end-users or systems.
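
One common pattern is to wrap the trained model in a small web service; below is a minimal sketch using FastAPI, where the model file name and feature schema are hypothetical assumptions:

```python
# Illustrative deployment sketch: serve a trained model behind an HTTP endpoint.
import pickle
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

with open("model.pkl", "rb") as f:   # hypothetical path to a previously trained model
    model = pickle.load(f)

class Features(BaseModel):
    square_meters: float
    bedrooms: int

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([[features.square_meters, features.bedrooms]])
    return {"prediction": float(prediction[0])}

# Run with: uvicorn main:app --reload  (assuming this file is saved as main.py)
```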

  • Monitoring and Logging:

Purpose: Keep tabs on the model's performance in the real world and track any drifts or anomalies.

Typical Code Tasks: Log predictions, track model metrics over time, set up alerts for drastic performance drops.

Benefits: Allows timely interventions, ensuring the model remains reliable over time.
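
A minimal sketch of prediction logging with a naive alert rule; the accuracy threshold is a placeholder assumption, and real systems typically rely on dedicated monitoring tooling:

```python
# Illustrative monitoring sketch: log every prediction and alert on a metric drop.
import logging
import time

logging.basicConfig(filename="predictions.log", level=logging.INFO)

ACCURACY_ALERT_THRESHOLD = 0.85  # placeholder threshold, chosen only for illustration

def log_prediction(features, prediction):
    logging.info("ts=%s features=%s prediction=%s", time.time(), features, prediction)

def check_model_health(recent_accuracy):
    # In practice this metric would come from comparing predictions with ground-truth labels.
    if recent_accuracy < ACCURACY_ALERT_THRESHOLD:
        logging.warning("Model accuracy dropped to %.2f - alert the on-call team", recent_accuracy)

log_prediction({"square_meters": 70, "bedrooms": 2}, 310_000)
check_model_health(recent_accuracy=0.81)
```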

  • Feedback Loops:

Purpose: Continuously improve the model by retraining it on new data, especially where predictions were incorrect.

Typical Code Tasks: Set up systems to capture user feedback or real-world outcomes, and use this data in subsequent training sessions.

Benefits: Keeps the model relevant and adaptive to evolving data patterns.
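
A conceptual sketch of such a loop, assuming labeled feedback rows are collected somewhere and periodically merged into the training set before retraining; the column names are hypothetical:

```python
# Illustrative feedback-loop sketch: merge new user feedback and retrain the model.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def retrain_with_feedback(train_df: pd.DataFrame, feedback_df: pd.DataFrame):
    """Append newly labeled feedback to the training data and refit the model."""
    combined = pd.concat([train_df, feedback_df], ignore_index=True)
    model = LogisticRegression()
    model.fit(combined[["feature_a", "feature_b"]], combined["label"])  # hypothetical columns
    return model

# Hypothetical data; in practice both frames would come from storage (e.g. a database).
train_df = pd.DataFrame({
    "feature_a": [0.1, 0.9, 0.4, 0.8],
    "feature_b": [1.0, 0.2, 0.7, 0.1],
    "label":     [0, 1, 0, 1],
})
feedback_df = pd.DataFrame({"feature_a": [0.3], "feature_b": [0.9], "label": [0]})

model = retrain_with_feedback(train_df, feedback_df)
```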

Overall, the model itself is just a small fraction of the work that needs to be done.

In conclusion, while the core ML model is undoubtedly the star of the show, the surrounding cast of code components ensures the show runs smoothly.

And we come to the next point:

Do we have a team for AI/ML?

Many companies ambitiously aim to recruit Data Scientists or Machine Learning Engineers who possess a wide range of skills, from pattern identification and model development to deployment, dashboard preparation, and maintenance. This expectation is often too high. A more pragmatic approach for companies, especially in the early stages, might be to engage contractors. While contracting poses the risk of knowledge attrition once the contractor departs, it serves as an effective strategy until the business case proves its merit. Starting with a single data scientist or ML engineer is another approach. However, companies soon recognize that the demand for Software Engineers and IT (Operations) personnel escalates at a rate that often surpasses the need for additional data scientists.

ML project team diagram

We can summarize the team members and skills required for an ML/AI project:

  • Data Scientists/Machine Learning Engineers:

Role: Develop algorithms, train, test, and refine machine learning models.

Skills: Statistical analysis, ML algorithms, programming (Python, R), deep learning frameworks.

  • Data Engineers:

Role: Manage and optimize databases to handle and query data effectively. Create robust and scalable data pipelines.

Skills: Database systems, ETL processes, SQL, programming.

  • Domain Experts:

Role: Provide insights about the industry or problem-specific knowledge, assisting in feature engineering and model interpretation.

Skills: In-depth knowledge of the specific domain/industry.

  • Data Analysts:

Role: Examine data to identify trends, conduct exploratory data analysis, and produce reports.

Skills: Data visualization tools, SQL, basic statistics, programming.

  • Software Developers:

Role: Integrate ML models into usable products, applications, or systems.

Skills: Multiple programming languages, web development frameworks, API integrations.

  • Infrastructure Engineers:

Role: Set up and maintain the infrastructure required for training and serving ML models, especially in cloud environments.

Skills: Cloud platforms, server management, container technologies.

  • Business Stakeholders/Product Managers:

Role: Define the project goals, provide resources, and ensure that ML projects align with business objectives.

Skills: Product management, business strategy, communication.

  • Quality Assurance/Testers:

Role: Evaluate the ML models and associated software for bugs or issues.

Skills: Testing methodologies, debugging, automation tools.

  • Scrum masters/Product owners:

Role: Oversee the project from initiation to completion, ensuring the work is prioritized and executed efficiently.

Skills: Project management, agile methodologies, communication.

Underestimating these requirements brings us to another topic:

Why do ML models get built but not deployed?

This question partly inspired this article. I recently read that Gartner estimates that 85% of built ML models never get deployed. I have taken a few classes where I learned how to build recommendation systems and image classification models within a few weeks, but can I deploy those models and turn them into a service for people to use? Not so easy. This led me to dig deeper and take a Machine Learning Operations (MLOps) course.

Teams and companies underestimate the technical stack required to deploy and maintain an ML product. A Google paper pointed out that resource management, server infrastructure, monitoring, and data pipelines are much larger pieces of an ML system than the model code itself. The image below, taken from the paper, illustrates this.


Diagram: relative effort consumed by different components of an ML project

To better understand the complexity behind ML, let's look at the process one more time:

ML/AI process map with the frameworks used during development (MLOps)

Wow. That is a lot.

Navigating the AI Terrain: Beware of the Pitfalls

AI product development is a vast and tumultuous terrain, riddled with pitfalls. Here are some challenges you may face:

  • Siloed Operations: AI teams, though specialized, often work in isolation, making it challenging for stakeholders to perceive their value. The antidote? Regularly showcase milestones to demonstrate progress and align with the organization's broader vision.
  • Data Biases: An AI model is only as good as its data. Biased or skewed data can lead to flawed outputs, so always ensure that the data mirrors real-world scenarios. Even when you aren't aware of a bias, it likely exists, because the model is trained on data produced by real humans who carry their own biased thinking. (See this shocking example of unintended bias in an ML model.)
  • Unforeseen Behaviors: Recall Microsoft's Tay? The AI chatbot that went rogue on Twitter. AI can sometimes behave unpredictably, which underscores the importance of building behavioral safeguards.

To learn about these challenges and how to overcome them in more detail, you can read this article.

Crafting a Stellar AI Product Management Career

For budding AI product managers, the journey can be thrilling yet challenging. Here are some vital insights:

  1. Data: A Double-edged Sword: High-quality, unbiased data is rare. Always be vigilant about the data sources and their credibility. If you plan to gather data yourself, you should have a solid plan for how you will do it.
  2. Stay Nimble: In the dynamic world of AI, agility is key. Be prepared to pivot based on feedback or new insights. Never give up.
  3. Be prepared: Don't underestimate the amount of work that needs to be done.
  4. Stay Updated: AI is ever-evolving. Regularly update your knowledge, embrace new tools, and cultivate a robust communication strategy with your team.
  5. Learn basic statistics: It's a good idea to understand statistics. It will help not only with AI product management, but also with Six Sigma projects and hypothesis testing. A solid statistical foundation is valuable for any PM who works in a scientific way. You can find basic statistics resources in the Library section of this site.

In conclusion, while AI presents myriad opportunities, it’s crucial to approach it with an informed and strategic mindset. Stay tuned for the second part of this series.

Conclusion

For AI product managers, understanding the fundamentals of AI, the intricacies of machine learning, and the associated risks is just the beginning. True success in this field stems from continuous learning, the ability to pivot when necessary, and maintaining a clear line of communication with all stakeholders.
