
Unrealistic entry-level position expectations?

We’re presently interviewing entry-level candidates for a data scientist position with image-related applications. Most of the candidates have a master’s qualification in ‘data science’ and most have CNN experience listed on their CV. However, they haven’t been able to convincingly answer the following questions:

* You mention CNNs on your CV; can you explain what a convolution is?

They might talk about masks and ‘sliding’ but cannot go into the details of the mathematical operation.

* Can you explain a Type I and Type II error? What’s a common trade-off between the two?

Complete blank from all candidates. If I clarify false positive and false negative, the best candidates point out the importance of context but don’t know about trade-offs.

* Your CV lists scikit-learn; what is kNN clustering? How might you determine the optimal k parameter?

Most can explain kNN; the best candidates have suggested plotting the clusters with the data and eyeballing it, but no one has mentioned elbow or silhouette plots.

Is it unreasonable to expect solid answers to these questions?

Edit: I’ve quite rightly received criticism for confusing kNN and k-means. I should clarify that I’ve misrepresented a question from another interviewer. She sometimes asked if the candidate could explain kNN when it arose in the context of NaN replacement for data cleaning. Other times she asked if they knew about unsupervised learning, and when k-means came up she would ask how you determine k. She never asked about both during the same interview, which I now understand was probably intentional, but I had assumed she was asking about the same thing.

I should also clarify that these were intended as ‘stretch’ questions, and the majority of the interview was questions like ‘can you tell us about the project you’ve listed on the application?’. But we felt that to distinguish better candidates it was necessary to take them out of their comfort zone.
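For anyone curious, the mathematical operation I was hoping candidates could describe can be sketched in a few lines of NumPy. This is a minimal valid-mode sketch; note that deep-learning libraries typically compute cross-correlation, i.e. the same sliding sum without the kernel flip:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D convolution: flip the kernel, slide it over the
    image, and take the sum of elementwise products at each position."""
    k = np.flipud(np.fliplr(kernel))  # true convolution flips the kernel
    kh, kw = k.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
print(convolve2d(image, kernel))
```

The ‘masks and sliding’ answer describes the loop; the detail I was after is the sum-of-products (and, strictly, the kernel flip) inside it.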

Comments (25)

  1. While I never remember which is a Type I and which is a Type II error, I would be able to answer the question with better than a blank stare… the other stuff is Greek to me (probably why you aren’t interviewing me for this job), so I cannot answer that for you.

  2. For fresh Masters candidates yes I would be expecting grad-level answers to these questions which delve into the math/theory embellished with practical examples.

    The lack of theory-based understanding of convolution may be due to how it’s covered, in a whirlwind of different layers/architectures, in general deep learning courses where there is so much content and the focus is on practical use. More CV-focused programs, however, would cover it a bit more deeply. And since the job is CV/image based, all candidates should have revised it before…

    Type I/II terminology may be less familiar, as false positive/negative is more common in ML; they should always be able to bring up recall/precision and explain those.

    Do not feel guilty about maintaining minimum standards to fill a high-performing role; there’s just been a lot of “junk” on the market in the past few years with DS bootcamps/masters everywhere. Maybe emphasise CV coursework in the JD to seek out those who have done more training in CV.

  3. The CNN question is nonsense trivia. Most people don’t remember the mathematical operations, including many seniors. An intuitive understanding is fine.

    The other two are fine.

  4. I would reframe question 2 as explaining precision vs recall.

  5. In the manner stated, yes.

    It’s missing a lot of context, and these are rather trivial academic concepts that may not have been covered in a degree program, or possibly just forgotten. Type I/II error is probably a semantic issue. You’re hitting a brick wall when asking in the form of your company’s internal jargon (even though it is common). Some schools do not use that terminology and instead simply focus on the topic, illustrated with confusion matrices and AUC/ROC material. Some venture into F1, accuracy, sensitivity, precision, etc. as the terminology. Also, trade-offs are context dependent here, depending on application: curing cancer vs sending emails about a new promotion.

    Detailing convolutions is very degree-topic and focus specific. They may very well be qualified data scientists academically who just never got to take an in-depth class on neural networks, and so know little more than masking and whatnot from a few Kaggle competitions and some independent study. This is a difference in data science focus, not quality of applicant. It’s also likely irrelevant unless your company is grant- and FAANG-sponsor funded to develop new and novel CV applications from scratch. Otherwise, you’re just using prebuilt libraries and all of that is abstracted away behind pre-trained models or high-level APIs.

    KNN, I mean, maybe they should know that a little better. Inflection-point calculation can be tricky, but using some form of inter/intra-cluster distance metric can be helpful. However, there are many distance functions to use with KNN, and sometimes these aren’t appropriate performance metrics. Also, unsupervised clustering with naive methods really does benefit from a human looking at the clusters in the end to determine whether they are coherent and rational, or at least interpretable.
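    As an illustration, the inter/intra-cluster idea is exactly what the silhouette score packages up. A toy sketch with scikit-learn (the blobs and all parameters here are invented purely for demonstration):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Three synthetic, well-separated blobs; in practice X is your feature matrix.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0.0, 3.0, 6.0)])

# Silhouette compares mean intra-cluster distance to the distance to the
# nearest other cluster; the k with the highest score is the best candidate.
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)
    print(k, round(scores[k], 3))
```

    Eyeballing the clusters afterwards, as suggested above, is still worthwhile: a good silhouette score doesn’t guarantee the clusters are interpretable.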

  6. kNN clustering is not a thing.

  7. At the end of my (objectively bad cs) undergrad last year I could have answered all of these ok except for the following:

    Type I vs Type II: couldn’t remember which one is which, but could explain a confusion matrix and precision vs accuracy, and maybe recall if I were super on it. Again, just an issue of which measure is which.

    Math behind convolution: never taught the math whatsoever, but could tell you about masks and padding and how to code the tutorial one in Keras.

    If only all the entry-level jobs I applied to were that easy I would have been hired!

  8. I am a junior Data Scientist, I’ve used and trained many CNNs in the past, both from scratch and using pretrained models, but I don’t know how to answer your first question. It may be because I took my last class on CNNs more than 2 years ago.

    I’ve always had problems remembering which is the Type I error and which is the Type II error, but if you asked me about precision and recall or the bias/variance trade-off I wouldn’t have any problem.

    But even rocks in our field should know how to estimate the right K for KNN.
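    For what that estimation typically looks like for the supervised kNN classifier (as opposed to k-means), here’s a sketch using scikit-learn’s bundled iris data; k is chosen by cross-validation rather than an elbow plot:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Score each candidate k (odd values avoid ties in binary votes) by
# 5-fold cross-validated accuracy, and keep the best one.
cv_scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
             for k in range(1, 16, 2)}
best_k = max(cv_scores, key=cv_scores.get)
print("best k:", best_k, "accuracy:", round(cv_scores[best_k], 3))
```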

  9. The answer is yes, you are being unreasonable. This is an interview; there’s basically an infinite number of things you could ask, but wow, they don’t know exactly the thing you wanted to pop-quiz them on. You expect them to know it ahead of time, from setup all the way to production. If you wanna go deeper, give them a list of topics you would like to discuss and let them get reacquainted with them. If they still mess up, then you can get a better idea. I keep my questions basic and try to assess their abilities for most of the interview.

  10. You are interviewing candidates to gauge whether they will succeed at the job right? In that case, you would want to ask questions to assess their understanding rather than hitting them with ‘gotcha’ questions.


    Q1. Your candidates probably know enough about CNNs to do the job (I hope). Rarely will someone need to implement a convolution from scratch on the job, but if they do then the details will be important. Not being able to explain convolution mathematically probably wouldn’t prevent them from succeeding at the job. Many people might only know the gist and get by just fine without the full details. For an entry-level position I don’t think this should be expected, but for senior (or PhD prereq.) positions most definitely.


    Q2. Type I and Type II errors are more in the realm of statistical hypothesis testing. Should the candidate know fundamental statistics? Absolutely. However, I would say getting the jargon (edit: terminology) right is more important if you are dealing with A/B testing. What you are really after is whether your candidate understands the trade-off between precision and recall. So how about asking them about confusion-matrix metrics?


    Q3. To clarify, KNN is a supervised learning algorithm. K-means is a clustering algorithm. K is not the same here. In fact, I would be suspicious if someone told me K-means was the unsupervised version of KNN. They are similar in that they are non-parametric algorithms based on a distance metric, but they are completely different algorithms. Now about the elbow method and silhouette plots – these may be simple concepts, but data science students rarely come across them, let alone unsupervised learning in general. I can see why: there is rarely a practical use case for clustering, as clusters more often than not do not appear clear cut like in textbooks. I wouldn’t expect a candidate to know about the elbow method, but I would be impressed if they did.


    If you are looking for someone with a background more in statistics, I would interview someone with a Masters in statistics or mathematics. However, someone with a Masters in data science or computer science will not necessarily know the details that you are alluding to. Intermediate statistics and above is not a requirement in many of these programs (you can make a case for and against this).
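    A concrete version of the confusion-matrix question from Q2 might look like this (the four counts are invented purely for illustration):

```python
# Toy confusion-matrix metrics for a binary classifier.
tp, fp, fn, tn = 80, 10, 20, 90  # made-up counts

precision = tp / (tp + fp)  # of everything flagged positive, how much was right
recall = tp / (tp + fn)     # of all actual positives, how much was found
# Type I error rate (false-positive rate) and Type II error rate (miss rate):
fpr = fp / (fp + tn)
fnr = fn / (fn + tp)

print(f"precision={precision:.3f} recall={recall:.3f} FPR={fpr:.3f} FNR={fnr:.3f}")
```

    Asking a candidate to walk through numbers like these, and to say which error they would trade away in a given application, tests the same understanding without hinging on remembering which label is Type I.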

  11. If none of your candidates can answer the questions then I think you know your answer.

  12. As a potential candidate for these DS entry-level positions, I don’t think these questions are too hard.
    I could at least bring up some points, if not directly answer.

    But also, grads in my cohort were basically taught the programming implementation with limited math knowledge, so it might be a little difficult for some.

  13. You are asking fundamentally bad questions that are closer to trivia than functionally important. Type I and II error isn’t really meaningful in anyone’s daily job, as almost no one uses hypothesis tests or univariate p-value testing. It is a distinctly different topic from precision/recall, false positives/negatives, and ROC curves, so you gave them a hint that isn’t even right and probably confused the candidates more. The convolution question is also bad, because convolution in neural networks is different from the convolution operation in other math/physics contexts. At a certain point people treat certain NN layers as black boxes and probably won’t remember the exact details, especially in a stressful interview.

    You should ask yourself what each interview question is trying to accomplish. The Type I/II question probes how good someone’s classical frequentist stats background is; the convolution question probes how deep their theoretical understanding of CNNs is. For an image company, the question you should be asking is: what is your applied understanding of CNNs? Better questions would be about common neural architectures like U-Net or ResNet, about their deep learning framework familiarity and experience, and about how they did hyperparameter tuning and learning-rate schedules.

    Finally, you should also recognize that about 50-60% of entry-level data scientist hires have to be let go because they don’t perform well. People out of school with no real-world work experience are always a major risk, and the best way to de-risk them is to see them in a real work environment if you can. I think the best way to evaluate people out of school is to have them work two weeks, paid, and see how it goes.

  14. I really hope you’re not asking these questions just to find the right candidate to build dashboards for you

  15. The types of error and explanation of kNN are totally valid questions. I’d expect entry-level M.S. candidates to be able to answer them.

    The CNN question I don’t particularly like. In the context of computer vision, I’m fine with somebody who’s able to describe how convolution works mechanically and explain that it’s effectively acting as automated feature generation (e.g. creating the appropriate image filters).

  16. Sometimes people can look like a deer in the headlights when asked about topics point-blank. I always strive to get the best performance out of my candidates. Partly it’s selfish: I don’t want to waste time interviewing a ton of candidates when it’s not necessary. I don’t try to give them an academic quiz. If I were to create a situation in which someone who would otherwise know the answer was unable to do it during the interview, then I would feel like I have failed.

    If you aren’t already prompting candidates, please consider making the process more well-rounded by giving the candidate the opportunity to speak on the topic of their choice, e.g., “Tell me about a topic you are interested in” or “Tell me about a project you’ve worked on” and direct the follow-up questions to gauge depth of experience and understanding. I’ve had junior candidates struggle to explain a metric they’ve used, but I could tell whether they were on the right track or not. I think people are most comfortable talking about things they know.

    As others have pointed out, you’ve shown you are dangerous enough to ask questions, but not skillful enough to know when you’re mistaken. For cultural reasons or otherwise, a candidate might not feel comfortable correcting you on so-called “kNN clustering.” I think you meant k-means clustering. So, a candidate might struggle to explain a type of clustering they think exists (because you named it), but don’t know how to use (because it doesn’t exist). Honestly, how can you have expectations for candidates when you are making mistakes like this?

    If you want to ask candidates about ML topics, perhaps you could embed the questions in more concrete scenarios. For instance, “I have trained a classifier to predict a disease. How should I measure the results and what tradeoffs are there?” It might help get their gears turning and provide necessary context.

    I believe you can become a better interviewer if you make an effort. Posting about your experiences was a good first step.

  17. This seems like pretty basic stuff. I’d be able to write an essay at least on all the points you mentioned when I was a fresher. So no, it’s not too much to expect, but your expectations should correlate with the compensation you are offering. I am curious about the comp which you have in mind for these freshers?

  18. I have an MS in statistics and 1 year of experience. I was in the top 20% bracket in our annual reviews (not trying to brag, just trying to give you an idea). My experience is more in NLP. Here is how I would do in this interview:

    CNN: about in line with masks and sliding and then mumbling. Although tbh, if the job posting says computer vision and I put CNNs on my resume, I’d probably be able to vaguely pull up more.

    Type I, Type II error. I could explain that. (Type one error is the worst because it’s when you didn’t think there was a tiger in the bush and there was one and now you’re dead. That’s why it’s #1).

    kNN: I would talk about the algorithm in an abstract way but honestly can’t remember the math super well. I would know that you can measure in-group and between-group variance as well, and also use the elbow method to detect the optimum k.

    If I were in your position, I’d probably want to see 2/3 good answers. Sure, maybe they forgot one of the questions, but on the other 2 they had really good reasoning. And maybe when you fill in the math they forgot, they are able to finish the reasoning of the problem.

    Just my opinion, but hope that helps.

  19. Hmm, I’m still a student so not speaking from real-world/job experience, but I’m quite happy to say that I can answer all of them bar the convolution part (I have some hazy memory of it, but if I were to explain the nitty-gritty of the math I couldn’t do it on the spot w/o proper review). But NGL, I was caught off guard doubting myself due to OP’s mix-up of kNN & k-means.

    I personally think the questions, bar the convolution one, are reasonable. And since this is a DS interview, for the Type I & II question I would “think” and immediately shift my answer in terms of precision & recall, along w/ their inherent trade-off nature. If I were you, I’d raise a similar sentiment of concern.

  20. Sorry for this off topic question:

    do companies really build and deploy image classification models?

    What are their real-world use cases?

    And who will be willing to pay for image classification?

    Someone help me understand this industry!

  21. Not a masters student, just have a bachelors in math and took a lot of data science adjacent classes in college. I feel like I could definitely give a basic explanation on Type I and Type II (tbh I forget which is which though) and kNN, including the elbow plot (but you would have to say “k nearest neighbors” and not “kNN” to jog my memory). I would be more prepared tho if I knew the interview was on basic stats and data science topics. Wouldn’t be able to tell you what a convolution is even though I’ve learned about it in class.

  22. If you interview a bad candidate, you interviewed a bad candidate. If you interview 5 bad candidates, that’s on you.

    It’s usually because you’re either not attracting remotely qualified candidates with your offer or your interview technique is getting the worst out of your candidates. I’m really not sure where I’m leaning based on what you’ve shared.

    On the one hand, any remotely qualified person can explain false positives and false negatives in a classifier. On the other, I have a suspicion that you’re a statistician and have a “language barrier” when talking about these things with people from a more ML background.

    I’d even wonder based on your “knn clustering” if you just don’t have as much knowledge as you think you do and are misunderstanding good answers. If you think that’s an unfair assessment of your knowledge based on one random fact, consider how that applies to the questions you’re asking.

    It’s hard to say but ultimately, it doesn’t matter for my main point. The point is, many bad interviews means it’s you/your company in the wrong because it is never everybody else.

  23. I think they’re entry-level adequate. I’m a data scientist coming from a mathematics degree. Those are things you have to know if you’ve studied the bare minimum of theory. Having said this, not many university data science courses require strong theoretical notions to pass and graduate, and many focus only on practical aspects, scientific programming and assignments. So, I wouldn’t expect a fresh data science graduate to know the mathematical definition of convolution, or what a measure space is. More problematic is the lack of knowledge of Type I and II errors or what a kNN does, as you don’t need “math” to understand these concepts.

    Having said that, I don’t think it’s only the candidates’ fault; they share the “blame” with the actual courses they attended. Moreover, if they didn’t see this in their university courses, they’ll likely never learn such notions, as you don’t actually “need to know them” in everyday work.

    (Please don’t come for me. I love theory, and I know my theory. In my experience tho, I work with DS that are oblivious to many basic concepts but still manage to do their tasks well enough)

  24. I dunno man, I’m a big fan of open ended questions: discussing projects on their CV and why they chose the solution: why it failed or succeeded.

    If I need more detail and it aligns with the position, I’ll pick it apart.

    I’m not here to haze candidates like they’re navy seals, I’m here to work _with_ them.

  25. I think they’re good questions personally. It depends how “entry-level”, but if you want them to work relatively independently on problems that don’t have a defined solution, they should have a working knowledge of those things (taking in good faith that the kNN questions were in reality comparable to a similarly worded k-means question). I maybe wouldn’t ask all 3 of them – just the one that is most relevant to the work you expect them to undertake, and drill down into that as far as you need to. But they’re pretty basic questions imo!

    I think “entry level” is tripping people up in the replies here. An “entry level” data scientist could be a completely fresh grad with a Bachelors in STEM and an online course to their name, with no real world experience as a Data Scientist, no DS projects, nothing – just a raw lump of brain-clay that will need to be hand-moulded by someone slightly more senior until they learn how to do stuff in the real world. I would expect these people to know literally nothing about Data Science and have to train them from scratch (like any new grad in any field).

    Or it could mean someone who is functionally a Data Scientist (perhaps they do personal projects, or have done DS-adjacent work in a Data Analysis context, or have done DS work during their Masters/PhD research, etc etc), but doesn’t have significant/substantial real world experience as a Data Scientist. That’s the other end of the scale, and I would expect these people to be able to work relatively independently with just a bit of coaching in how “things are done” in our organisation. In this case, I’d expect a working knowledge of those 3 things (though maybe not a full mathematical treatment of them – just enough to understand, interpret and contextualise the results).

    I’ve hired both, and they’re both very different profiles with different pay and responsibilities. But they’re both “entry level” in some way.
