Common Mistakes with Good and Bad Responses
#1 Jumping straight into the model
Some candidates jump straight to the ML algorithm they would use to solve the problem, without first articulating the business application, the goal of the solution, and success metrics.
Bad Response: “For fraud detection, I’ll use a deep neural network because it’s powerful.”
Good Response: “Will this solution be used for real-time fraud detection on every card swipe? If so, we need a fast and efficient model. Let me identify all the data I can use for this model. First, I have transaction metadata like transaction amount, location, and time. I also have this card’s past transaction data — I can look back up to 30 days to reduce the amount of data I need to analyze in real time, or I might pre-compute derived categorical/binary features from the transaction history, such as ‘is_transaction_30_days’, ‘most_frequent_transaction_location_30days’, etc. Initially, I’ll use logistic regression to set a baseline before considering more complex models like deep neural networks if necessary.”
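The pre-computation step in the good response can be sketched as follows. This is a minimal illustration, not a production feature pipeline; the function and feature names (`derive_history_features`, `has_transaction_30_days`, etc.) are hypothetical stand-ins for the derived features mentioned above.

```python
from collections import Counter
from datetime import datetime, timedelta

def derive_history_features(txn_history, now, window_days=30):
    """Pre-compute derived features from a card's recent transaction
    history so they can be looked up cheaply at real-time scoring time.
    txn_history: list of (timestamp, amount, location) tuples.
    """
    cutoff = now - timedelta(days=window_days)
    recent = [t for t in txn_history if t[0] >= cutoff]
    locations = Counter(loc for _, _, loc in recent)
    return {
        "has_transaction_30_days": len(recent) > 0,
        "num_transactions_30_days": len(recent),
        "most_frequent_transaction_location_30days":
            locations.most_common(1)[0][0] if locations else None,
        "avg_amount_30_days":
            sum(a for _, a, _ in recent) / len(recent) if recent else 0.0,
    }
```

These derived features would be refreshed offline (e.g. daily) and joined with the live transaction metadata at scoring time, keeping the real-time path fast.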
#2 Keeping it too high level
You don’t want to give just a boilerplate strategy; at each step, include specific examples that are relevant to the given business problem.
Bad Response: “I will do exploratory data analysis, remove outliers and build a model to predict user engagement.”
Good Response: “I will analyze historical user data, including page views, click-through rates, and time spent on the site. I’ll analyze categorical features such as product category and brand, and remove them if more than 75% of their values are missing. I would be cautious at this step, though, as the absence of a feature can itself be informative. A logistic regression model can serve as a starting point, followed by more complex models like Random Forest if needed.”
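The missing-value pruning rule from the good response could look like the sketch below. The helper name `prune_sparse_features` and the `keep` escape hatch (for features whose absence is itself informative) are illustrative assumptions, not a prescribed API.

```python
def prune_sparse_features(rows, max_missing_frac=0.75, keep=()):
    """Drop features whose fraction of missing values exceeds the
    threshold. rows: list of dicts mapping feature name -> value,
    with None meaning missing. Features in `keep` are retained
    regardless, since absence can itself carry signal."""
    if not rows:
        return []
    names = set().union(*(r.keys() for r in rows))
    kept = []
    for name in sorted(names):
        missing = sum(1 for r in rows if r.get(name) is None)
        if name in keep or missing / len(rows) <= max_missing_frac:
            kept.append(name)
    return kept
```

In practice you might also add a missingness-indicator column for a dropped feature instead of discarding the signal entirely.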
#3 Only solving for the happy case
It is easy to spot a lack of industry experience when a candidate talks only about the data and modeling strategy without discussing data quality issues or the other nuances of real-world data and applications.
Bad Response: “I’ll train a classifier using past user-item clicks for a given search query to predict ad click.”
Good Response: “Past user-item clicks for the query may inherently have a position bias, as items shown at higher positions in the search results are more likely to be clicked. I will correct for this position bias using inverse propensity weighting: estimate the click probability at each position (the propensity), then weight each label by the inverse of its position’s propensity.”
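A minimal sketch of that correction is below. Note the simplifying assumption: the propensity here is just the empirical click-through rate at each position, whereas production systems often estimate propensities from randomized result-order interventions; the function names are illustrative.

```python
from collections import defaultdict

def estimate_position_propensity(logs):
    """Estimate each position's click propensity as its empirical
    click-through rate. logs: list of (position, clicked) pairs."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for pos, click in logs:
        shown[pos] += 1
        clicked[pos] += int(click)
    return {pos: clicked[pos] / shown[pos] for pos in shown}

def ipw_weight(position, clicked, propensity, floor=0.01):
    """Weight a clicked example by the inverse of its position's
    propensity, so clicks at low positions count more. The floor
    prevents exploding weights at rarely clicked positions."""
    if not clicked:
        return 1.0
    return 1.0 / max(propensity.get(position, floor), floor)
```

With these weights, a click at a rarely clicked position 3 contributes more to the training loss than an easy click at position 1, counteracting the bias in the logs.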
#4 Starting with the most complex models
You want to show a bias for action by starting with lightweight models that are quick and inexpensive to develop, and introducing complexity only as needed.
Bad Response: “I’ll use a state-of-the-art dual encoder deep learning architecture for the recommendation system.”
Good Response: “I’ll start with a simple collaborative filtering approach to establish a baseline. Once we understand its performance, we can introduce complexity with matrix factorization or deep learning models such as a dual encoder if the initial results indicate the need.”
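The collaborative-filtering baseline in the good response can be illustrated with a tiny item-item approach: compute cosine similarity between item columns of the user-item matrix, then score unseen items by a similarity-weighted sum over the user's ratings. The function names and toy data are assumptions for illustration.

```python
import math

def item_similarity(ratings):
    """Cosine similarity between item columns of a user-item matrix.
    ratings: dict user -> dict item -> rating."""
    items = sorted({i for r in ratings.values() for i in r})
    users = sorted(ratings)
    vec = {i: [ratings[u].get(i, 0.0) for u in users] for i in items}
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = (math.sqrt(sum(x * x for x in a))
               * math.sqrt(sum(y * y for y in b)))
        return num / den if den else 0.0
    return {(a, b): cos(vec[a], vec[b]) for a in items for b in items if a != b}

def recommend(user, ratings, sim, top_n=1):
    """Score items the user has not rated by a similarity-weighted
    sum over the items they have rated."""
    seen = ratings[user]
    candidates = {i for r in ratings.values() for i in r} - set(seen)
    scores = {item: sum(rating * sim.get((item, rated), 0.0)
                        for rated, rating in seen.items())
              for item in candidates}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

A baseline like this is cheap to build and gives a performance floor that any matrix factorization or dual-encoder model must beat to justify its cost.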
#5 Not pivoting when curveballs are thrown
The interviewer may interrupt your strategy and ask follow-up questions or propose alternate scenarios to probe the depth of your understanding of different techniques. You should be able to pivot your strategy as they introduce new challenges or variations.
Bad Response: “If we do not have access to Personally Identifiable Information for the user, we cannot build a personalized model.”
Good Response: “For users that opt-out (or do not opt-in) to share their PII or past interaction data, we can treat them as cold start users and show them popularity-based recommendations. We can also include an online session RNN to adapt recommendations based on their in-session activity.”
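The fallback logic from the good response can be sketched in a few lines: serve popularity-based recommendations when no shareable history exists, and hand off to the personalized recommender otherwise. The names (`recommend_with_fallback`, `personalized_fn`) are hypothetical; a real system would also layer in the in-session signal mentioned above.

```python
from collections import Counter

def recommend_with_fallback(user_history, global_clicks,
                            personalized_fn=None, top_n=3):
    """For users without shareable history (opted out of PII or
    interaction-data sharing), fall back to popularity-based
    recommendations; otherwise defer to the personalized model."""
    if user_history and personalized_fn is not None:
        return personalized_fn(user_history)
    popularity = Counter(global_clicks)
    return [item for item, _ in popularity.most_common(top_n)]
```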
Response Calibration as per Level
As the job level increases, the breadth and depth expectation in the response also increases. This is best explained through an example question. Let’s say you are asked to design a fraud detection system for an online payment platform.
Entry-level (0–2 years of relevant industry experience)
For this level, the candidate should focus on data (features, preprocessing techniques), model (simple baseline model, more advanced model, loss function, optimization method), and evaluation metrics (offline metrics, A/B experiment design). A good flow would be:
- Identify features and preprocessing: e.g. transaction amount, location, time of day, and other categorical features representing payment history.
- Baseline and advanced models: e.g. a logistic regression model as a baseline; consider gradient-boosted trees for the next version.
- Evaluation metrics: e.g. precision, recall, F1 score.
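The evaluation metrics in the last bullet can be computed directly from the confusion-matrix counts. A minimal sketch, treating class 1 as the fraudulent (positive) class:

```python
def classification_metrics(y_true, y_pred):
    """Precision, recall, and F1 for a binary fraud classifier,
    where the positive class (1) marks fraudulent transactions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Being able to state these definitions precisely (and why accuracy is misleading on imbalanced fraud data) is exactly the depth expected at this level.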
Mid-level Experience (3–6 years of relevant industry experience)
For this level, the candidate should focus on the business problem and nuances in deploying models in production. A good flow would be:
- Business requirements: e.g. tradeoff between recall and precision as we want to reduce fraud amount while keeping the false positive rate low for a better user experience; highlight the need for interpretable models.
- Data nuances: e.g. the number of fraudulent transactions is much smaller than the number of non-fraudulent ones; address the class imbalance using techniques like SMOTE.
- Model tradeoffs: e.g. a heuristic-based baseline, followed by logistic regression, followed by tree-based models, which can be easier to interpret than a logistic regression that relies on hard-to-interpret non-linear feature transformations.
- Deployment nuances: e.g. real-time transaction processing, and a model refresh cadence to adapt to evolving fraud patterns.
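To make the SMOTE bullet concrete, here is a deliberately simplified sketch of the core idea: synthesize new minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. Real SMOTE implementations (e.g. in imbalanced-learn) add more machinery; the function name and toy data here are illustrative.

```python
import random

def smote_oversample(minority, n_synthetic, k=2, seed=0):
    """Simplified SMOTE: each synthetic sample lies on the segment
    between a random minority point and one of its k nearest
    minority neighbours. minority: list of numeric feature tuples."""
    rng = random.Random(seed)
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p != base),
                            key=lambda p: sq_dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + gap * (y - x)
                               for x, y in zip(base, nb)))
    return synthetic
```

A strong mid-level answer would also note SMOTE's limits, e.g. that interpolating high-dimensional or categorical fraud features can create unrealistic samples, and that class weights are a simpler alternative.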
Senior/Staff/Principal level Experience (6+ years)
For this level, the candidate is expected to draw on their multi-year experience to think critically about the wider ecosystem, identify the core challenges in this space, and highlight how different ML sub-systems come together to solve the larger problem. Address challenges such as real-time data processing and ensuring model robustness against adversarial attacks. Propose a multi-layered approach: rule-based systems for immediate flagging, and deep learning models for pattern recognition. Include feedback loops and monitoring schemes to ensure the model adapts to new forms of fraud. Also, showcase that you are up to date with the latest industry trends wherever applicable (e.g. using GPUs, representation learning, reinforcement learning, edge computing, federated learning, building models without PII data, fairness and bias in ML, etc.)
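The multi-layered approach above can be sketched as a simple decision pipeline: cheap deterministic rules flag obvious fraud immediately, and everything else falls through to a model score. All names, thresholds, and the single example rule are hypothetical; a production system would have many rules, calibrated thresholds, and a review queue.

```python
def score_transaction(txn, model_score_fn, rules=None, block_threshold=0.9):
    """Multi-layered fraud decision: rules first, model second.
    Returns a (decision, reason) pair, where decision is one of
    'block', 'review', or 'allow'."""
    rules = rules or [
        # hypothetical rule: absurdly large single transaction
        lambda t: "amount_over_limit" if t["amount"] > 10_000 else None,
    ]
    for rule in rules:
        reason = rule(txn)
        if reason:
            return "block", reason
    score = model_score_fn(txn)  # e.g. a deep model's fraud probability
    if score >= block_threshold:
        return "block", "model_score"
    if score >= 0.5:
        return "review", "model_score"
    return "allow", "model_score"
```

In an interview, the layering itself is the point: rules give interpretable, instantly updatable coverage of known fraud patterns, while the model layer generalizes to novel ones, and the monitoring/feedback loops keep both layers current.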