Algorithmic Thinking for Data Scientists | Chinmay Kakatkar


How to Write Code that Saves Time and Space


Note: All example code snippets in the following sections have been created by the author of this article.

Algorithmic thinking is about combining rigorous logic and creativity to frame, solve, and analyze problems, usually with the help of a computer. Problems involving some form of sorting, searching, and optimization are closely associated with algorithmic thinking and often show up during data science projects. Algorithmic thinking helps us solve such problems in ways that make efficient use of time and space (as in the disk space or memory of a computer), leading to fast and frugal algorithms.

Even if the costs of storage and computing continue to drop in the foreseeable future, algorithmic thinking is unlikely to become any less important for data science projects than it is today for at least a few key reasons. First, the requirements of customers tend to outpace the capabilities of available solutions in many commercial use cases, regardless of the underlying complexity of data science pipelines (from data sourcing and transformation to modeling and provisioning). Customers expect tasks that take days or hours to take minutes or seconds, and tasks that take minutes or seconds to happen in the blink of an eye. Second, a growing number of use cases involving on-device analytics (e.g., in the context of embedded systems, IoT and edge computing) require resource-efficient computation; space and memory are at a premium, and it may not be possible to offload computational tasks to a more powerful, centralized infrastructure on the cloud. And third, the operation of industrial data science pipelines can consume significant energy, which can worsen the ongoing climate crisis. A firm grasp of algorithmic thinking can help data scientists build efficient and sustainable solutions that address such challenges.

While data scientists with computer science degrees will be familiar with the core concepts of algorithmic thinking, many increasingly enter the field with other backgrounds, ranging from the natural and social sciences to the arts; this trend is likely to accelerate in the coming years as a result of advances in generative AI and the growing prevalence of data science in school and university curriculums. As such, the following sections of this article are aimed primarily at readers unfamiliar with algorithmic thinking. We will begin with a high-level overview of the algorithmic problem-solving process, and then start to build some intuition for algorithmic thinking in a hands-on way by looking at a selection of programming challenges posted on HackerRank (a popular platform used by companies for hiring data scientists). We will also go over some helpful resources for further reading. Finally, we will briefly talk about the relevance of algorithmic thinking in the context of AI-assisted software development (e.g., using GitHub Copilot), and conclude with a wrap up.

How to Solve It

The title of this section is also the title of a famous book by Hungarian-American mathematician and Stanford professor George Pólya, first published in 1945. In How to Solve It (link), Pólya lays out a deceptively simple, yet highly effective, four-step approach that can be applied to algorithmic problem solving:

  1. Understand the problem: Frame the problem carefully, with due consideration to any constraints on the problem and solution space (e.g., permissible input data types and data ranges, output format, maximum execution time). Ask questions such as, “can I restate the problem in my own words?”, and “do I have enough data to implement a useful solution?”, to check your understanding. Use concrete examples (or datasets) to make the problem and its edge cases more tangible. Spending sufficient time on this step often makes the remaining steps easier to carry out.
  2. Devise a plan: This will often involve breaking down the problem into smaller sub-problems for which efficient solutions may already be known. The ability to identify and apply suitable existing solutions to different types of sub-problems (e.g., in searching, sorting, etc.) will come with practice and experience. But sometimes, additional creativity may be needed to combine multiple existing approaches, invent a new approach, or borrow an approach from another domain using analogies. Pólya gives several tips to aid the thinking process, such as drawing a diagram and working backwards from a desired goal. In general, it is useful at this stage to gauge, at least at a high level, whether the devised plan is likely to solve the specified problem.
  3. Carry out the plan: Implement the solution using relevant tooling. In a data science project, this might involve libraries such as scikit-learn, PyTorch and TensorFlow for machine learning, and platforms such as AWS, GCP or Azure for hosting and running pipelines. Attention to detail is crucial at this stage, since even small bugs in the code can lead to implementations that do not accurately reflect the previously devised plan, and thus do not end up solving the stated problem. Add sufficient unit tests to check whether the different parts of the code work properly, even for edge cases.
  4. Look back: The practice of “looking back” is an instinctive part of the validation phase of most data science projects; questions such as “did the new machine learning model perform better than the last?” can only be answered by collecting and reviewing relevant metrics for each experiment. But reviewing other aspects of the data science pipeline (e.g., the ETL code, test cases, productization scripts) and AI lifecycle management (e.g., level of automation, attention to data privacy and security, implementation of a feedback loop in production) is also vital for improving the current project and doing better on future projects, even if finding the time for such a holistic “look back” can be challenging in a fast-paced work environment.

Steps 1 and 2 in Pólya’s problem-solving process can be particularly difficult to get right. Framing a problem or solution in a conceptually logical and systematic way is often a non-trivial task. However, gaining familiarity with conceptual frameworks (analytical structures for representing abstract concepts) can help significantly in this regard. Common examples of conceptual frameworks include tree diagrams, matrices, process diagrams, and relational diagrams. The book Conceptual Frameworks: A Guide to Structuring Analyses, Decisions and Presentations (link), written by the author of this article, teaches how to understand, create, apply and evaluate such conceptual frameworks in an easy-to-digest manner.

Algorithmic Complexity

One topic that deserves special attention in the context of algorithmic problem solving is that of complexity. When comparing two different algorithms, it is useful to consider the time and space complexity of each algorithm, i.e., how the time and space taken by each algorithm scales relative to the problem size (or data size). There are five basic levels of complexity, from lowest (best) to highest (worst), that you should be aware of. We will only describe them below in terms of time complexity to simplify the discussion:

  1. Instantaneous: Regardless of the scale of the problem, the algorithm executes instantaneously. E.g., to determine whether an integer is even, we can simply check if dividing its rightmost digit by two leaves no remainder, regardless of the size of the integer. Accessing a list element by index can also typically be done instantaneously, no matter the length of the list.
  2. Logarithmic: For a dataset of size n, the algorithm executes in log(n) time steps. Note that logarithms may have different bases (e.g., log2(n) for binary search as the size of the problem is halved in each iteration). Like instantaneous algorithms, those with logarithmic complexity are attractive because they scale sub-linearly with respect to the size of the problem.
  3. Linear: As the name suggests, for a dataset of size n, an algorithm with linear complexity executes in roughly n time steps.
  4. Polynomial: The algorithm executes in n^2 (quadratic), n^3 (cubic), or more generally, n^m time steps, for some positive integer m. A common trick to check for polynomial complexity in code is to count the number of nested loops; e.g., a function with 2 nested loops (a loop within a loop) has a complexity of n^2, a function with 3 nested loops has a complexity of n^3, and so on (see the sketch after this list).
  5. Exponential: The algorithm executes in 2^n, 3^n, or more generally, m^n time steps, for some positive integer m. See these posts on StackExchange (link 1, link 2) to see why exponential functions eventually grow faster than polynomial ones and are therefore worse in terms of algorithmic complexity for large problems.
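
To make the nested-loop check from point 4 concrete, here is a small illustrative sketch (the function and its task are invented for this article, not taken from HackerRank): two nested loops over the same input of size n mean the work grows roughly as n^2.

#include <vector>
using namespace std;

// Counts pairs (i, j) with i < j whose values sum to a target.
// Two nested loops over the same data => roughly n*n/2 comparisons => quadratic complexity.
int countPairsWithSum(const vector<int>& v, int target) {
    int count = 0;
    for (size_t i = 0; i < v.size(); i++)
        for (size_t j = i + 1; j < v.size(); j++)
            if (v[i] + v[j] == target) count++;
    return count;
}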

Some algorithms may manifest additive or multiplicative combinations of the above complexity levels. E.g., a for loop followed by a binary search entails an additive combination of linear and logarithmic complexities, attributable to sequential execution of the loop and the search routine, respectively. By contrast, a for loop that carries out a binary search in each iteration entails a multiplicative combination of linear and logarithmic complexities. While multiplicative combinations may generally be more expensive than additive ones, sometimes they are unavoidable and can still be optimized. E.g., a sorting algorithm such as merge sort, with a time complexity of n log(n), is less expensive than selection sort, which has a quadratic time complexity (see this article for a table comparing the complexities of different sorting algorithms).
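
As a rough illustrative sketch of the difference (the function names and toy tasks are invented for this article; both assume the vector v is sorted in ascending order), the first function below does one linear pass followed by a single binary search, an additive n + log(n), while the second performs a binary search inside every loop iteration, a multiplicative n*log(n):

#include <algorithm>
#include <vector>
using namespace std;

// Additive combination: one linear pass followed by one binary search => ~n + log(n) steps
bool containsOwnAverage(const vector<int>& v) {
    if (v.empty()) return false;
    long long sum = 0;
    for (int x : v) sum += x;                       // linear pass over all n elements
    int avg = (int)(sum / (long long)v.size());     // integer average of the elements
    return binary_search(v.begin(), v.end(), avg);  // single log(n) search
}

// Multiplicative combination: a binary search inside every iteration => ~n * log(n) steps
int countComplements(const vector<int>& v, int target) {
    int count = 0;
    for (int x : v)                                                  // linear loop
        if (binary_search(v.begin(), v.end(), target - x)) count++;  // log(n) search per iteration
    return count;
}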

In the following, we will study a selection of problems posted on HackerRank. Similar problems can be found on platforms such as LeetCode and CodeWars. Studying problems posted on such platforms will help train your algorithmic thinking muscles, can help you more easily navigate technical interviews (hiring managers regularly pose algorithmic questions to candidates applying for data science roles), and may yield pieces of code that you can reuse on the job.

All example code snippets below have been written by the author of this article in C++, a popular choice among practitioners for building fast data pipelines. These snippets can be readily translated to other languages such as Python or R as needed. To simplify the code snippets, we will assume that the following lines are present at the top of the code file:

#include <bits/stdc++.h>
using namespace std;

This will allow us to omit “std::” everywhere in the code, letting readers focus on the algorithms themselves. Of course, in production C++ code, only the relevant headers would be included and “std::” written explicitly, as per the applicable coding guidelines.

When a Formula Will Do

A problem that initially seems to call for an iterative solution with polynomial complexity (e.g., using for loops, while loops, or list comprehensions) can sometimes be solved algebraically using a formula that returns the desired answer instantaneously.

Consider the Number Line Jumps problem (link). Two kangaroos are placed somewhere on a number line (at positions x1 and x2, respectively) and can move by jumping. The first kangaroo can move v1 meters per jump, while the second can move v2 meters per jump. Given input values for x1, v1, x2, and v2, the task is to determine whether it is possible for both kangaroos to end up at the same position on the number line at some future time step, assuming that each kangaroo can make only one jump per time step; the solution function should return "YES" or "NO" accordingly.

Suppose x1 is smaller than x2. Then one approach is to implement a loop that checks if the kangaroo starting at x1 will ever catch up with the kangaroo starting at x2. In other words, we would check whether a positive integer t exists such that x1 + v1*t = x2 + v2*t. If x1 is greater than x2, we could swap the values in the respective variables and follow the same approach described above. But such a solution could take a long time to execute if t is large and might even loop infinitely (causing a time-out or crash) if the kangaroos never end up meeting.
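
For illustration, a minimal sketch of this brute-force simulation might look like the snippet below (the max_steps cap is a made-up safeguard, not part of the HackerRank problem, to avoid looping forever when the kangaroos never meet):

string kangaroo_naive(int x1, int v1, int x2, int v2, int max_steps = 10000) {
    // Simulate one jump per kangaroo per time step
    for (int t = 0; t < max_steps; t++) {
        if (x1 == x2) return "YES";  // Both kangaroos share a position after t jumps
        x1 += v1;
        x2 += v2;
    }
    return "NO";  // Gave up after max_steps jumps
}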

We can do much better. Let us rearrange the above equation to solve for a positive integer t. We get t = (x1 - x2)/(v2 - v1). This equation for t is undefined when v2 = v1 (due to division by zero), but in such a case we could return "YES" if both kangaroos start at the same position, since both kangaroos will then obviously arrive at the same position on the number line at the very next time step. Moreover, if the jump distances of both kangaroos are the same but the starting positions are different, then we can directly return "NO", since the kangaroo starting on the left will never catch up with the kangaroo on the right. Finally, if we find a positive solution for t, we should check that it is also an integer; this can be done by casting t to an integer data type and checking whether the result is equivalent to the original value. The example code snippet below implements this solution.

string kangaroo(int x1, int v1, int x2, int v2) {
    // Equal jump distances: the kangaroos meet only if they already share a position
    if (v2 == v1) return (x1 == x2) ? "YES" : "NO";
    // Solve x1 + v1*t = x2 + v2*t for t; a meeting requires t to be a positive integer
    float t = 1.*(x1 - x2)/(v2 - v1);
    return ((0 < t) && (t == (int) t)) ? "YES" : "NO";
}

Picking from Multiple Options

There may be several valid ways of solving the same problem. Having found one solution approach, trying to find others can still be illuminating and worthwhile; each approach will have its pros and cons, making it more or less suitable to the problem context. To illustrate this, we will look at three problems below in varying degrees of detail.

First, consider the Beautiful Days at the Movies problem (link). Upon reading the description, it will become apparent that a key part of solving the problem is coming up with a function to reverse a positive integer. E.g., the reverse of 123 is 321 and the reverse of 12000 is 21 (note the omission of leading zeros in the reversed number).

One solution approach (call it reverse_num_v1) uses a combination of division and modulo operations to bring the rightmost digit to the leftmost position in a way that naturally takes care of leading zeros; see an example implementation below. What makes this approach attractive is that, since the number of digits grows logarithmically relative to the size of the number, the time complexity of reverse_num_v1 is sub-linear; the space complexity is also negligible.

int reverse_num_v1(int x) {
    long long res = 0;
    while (x) {
        res = res * 10 + x % 10;
        x /= 10;
        // Check for integer overflow
        if (res > INT_MAX || res < INT_MIN) return 0;
    }
    return res;
}

Another approach (call it reverse_num_v2) uses the idea of converting the integer to a string data type, reversing it, trimming any leading zeros, converting the string back to an integer, and returning the result; see an example implementation below.

int reverse_num_v2(int x) {
    string str = to_string(x);
    reverse(str.begin(), str.end());
    // Remove leading zeros (keep at least one character)
    str.erase(0, min(str.find_first_not_of('0'), str.size() - 1));
    // Parse into a wider type so the overflow check below is meaningful
    long long res = stoll(str);
    // Check for integer overflow
    return (res > INT_MAX || res < INT_MIN) ? 0 : res;
}

Such type conversion is a common practice in many languages (C++, Python, etc.), library functions for string reversal and trimming leading zeros are often readily available, and chaining functions to form a pipeline of data transformation operations is a typical pattern in data science projects; reverse_num_v2 might thus be the first approach that occurs to many data scientists. If memory space is scarce, however, reverse_num_v1 might be the better option, since the string representation of an integer will take up more space than the integer itself (see this documentation of memory requirements for different data types in C++).

Next, let us briefly consider two further problems, Time Conversion (link) and Forming a Magic Square (link). While these problems might appear to be quite different on the surface, the same technique — namely, the use of lookup tables (or maps) — can be used to solve both problems. In the case of Time Conversion, a lookup table can be used to provide an instantaneous mapping between 12-hour and 24-hour formats for afternoon times (e.g., 8 pm is mapped to 20, 9 pm is mapped to 21, and so on). In Forming a Magic Square, the problem is restricted to magic squares consisting of 3 rows and 3 columns, and as it happens, there are only 8 such squares. By storing the configurations of these 8 magic squares in a lookup table, we can implement a fairly simple solution to the problem despite its “medium” difficulty rating on HackerRank. It is left to the reader to go through these problems in more detail via the links provided above, but the relevant example code snippets of each solution are shown below for comparison.

Time Conversion:

string timeConversion(string s) {
    // substr(pos, len) starts at position pos and spans len characters
    if (s.substr(s.size() - 2) == "AM") {
        if (s.substr(0, 2) == "12") return "00" + s.substr(2, s.size() - 4);
        else return s.substr(0, s.size() - 2);
    }
    else {
        // PM means add 12 to hours between 01 and 11
        // Store all 11 mappings of afternoon hours in a lookup table/map
        map<string, string> m = {
            {"01", "13"}, {"02", "14"}, {"03", "15"}, {"04", "16"},
            {"05", "17"}, {"06", "18"}, {"07", "19"}, {"08", "20"},
            {"09", "21"}, {"10", "22"}, {"11", "23"}
        };
        string hh = s.substr(0, 2);
        if (m.count(hh)) return m[hh] + s.substr(2, s.size() - 4);
        else return s.substr(0, s.size() - 2);
    }
}

Forming a Magic Square:

Notice that, although part of the code below uses 3 nested for loops, only 8*3*3 = 72 loop iterations involving simple operations are ever needed to solve the problem.

int formingMagicSquare(vector<vector<int>> s) {
    // Store all 8 possible 3x3 magic squares in a lookup table/matrix
    vector<vector<int>> magic_squares = {
        {8, 1, 6, 3, 5, 7, 4, 9, 2},
        {6, 1, 8, 7, 5, 3, 2, 9, 4},
        {4, 9, 2, 3, 5, 7, 8, 1, 6},
        {2, 9, 4, 7, 5, 3, 6, 1, 8},
        {8, 3, 4, 1, 5, 9, 6, 7, 2},
        {4, 3, 8, 9, 5, 1, 2, 7, 6},
        {6, 7, 2, 1, 5, 9, 8, 3, 4},
        {2, 7, 6, 9, 5, 1, 4, 3, 8},
    };
    int min_cost = 81; // Initialize with an upper bound on the cost (9 cells, each off by at most 9)
    for (auto& magic_square : magic_squares) {
        int cost = 0;
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                cost += abs(s[i][j] - magic_square[3*i + j]);
            }
        }
        min_cost = min(min_cost, cost);
    }
    return min_cost;
}

Divide and Conquer

When a problem seems too big or too complicated to solve in one go, it can often be a good idea to divide the original problem into smaller sub-problems that can each be conquered more easily. The exact nature of these sub-problems (e.g., sorting, searching, transforming), and their “part-to-whole” relationship with the original problem may vary. For instance, in the case of data cleaning, a common type of problem in data science, each sub-problem may represent a specific, sequential step in the data cleaning process (e.g., removing stop-words, lemmatization). In a “go/no-go” decision-making problem, each sub-problem might reflect smaller decisions that must all result in a “go” decision for the original problem to resolve to “go”; in logical terms, one can think of this as a complex Boolean statement of the form A AND B.

To see how divide-and-conquer works in practice, we will look at two problems that appear to be very different on the surface. First, let us consider the Electronics Shop problem (link), which is fundamentally about constrained optimization. Given a total spending budget b and unsorted price lists for computer keyboards and USB drives (call these K and D, respectively), the goal is to buy the most expensive keyboard and drive without exceeding the budget. The price lists can have up to 1000 items in the problem posted on HackerRank, but we can imagine much longer lists in practice.

A naïve approach might be to iterate through the price lists K and D with two nested loops to find the i-th keyboard and the j-th drive that make maximal use of the budget. This would be easy to implement, but very slow if K and D are long, especially since the price lists are unsorted. In fact, the time complexity of the naïve approach is quadratic, which does not bode well for scaling to large datasets. A more efficient approach would work as follows. First, sort both price lists. Second, pick the shorter of the two price lists for looping. Third, for each item x in the looped list, do a binary search on the other list to find the most expensive item y (if any) such that x + y does not exceed the given budget b, and maintain this result in a variable called max_spent outside the loop. In each successive iteration of the loop, max_spent is only updated if the total cost of the latest keyboard-drive pair is within budget and exceeds the current value of max_spent.
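
For comparison, a minimal sketch of the naïve approach might look as follows (the function name is invented for this article; every keyboard is checked against every drive, hence the quadratic complexity):

int getMoneySpentNaive(const vector<int>& keyboards, const vector<int>& drives, int b) {
    int max_spent = -1;
    // Nested loops over both unsorted price lists => quadratic complexity
    for (int k : keyboards)
        for (int d : drives)
            if (k + d <= b) max_spent = max(max_spent, k + d);
    return max_spent;
}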

Although there is no way around searching both price lists in this problem, the efficient approach reduces the overall search time significantly by picking the smaller price list for looping, and crucially, doing a binary search of the longer price list (which takes logarithmic/sub-linear time to execute). Moreover, while it might initially seem that pre-sorting the two price lists adds to the solution complexity, the sorting can actually be done quite efficiently (e.g., using merge sort), and crucially, this enables the binary search of the longer price list. The net result is a much faster algorithm compared to the naïve approach. See an example implementation of the efficient approach below:

int findLargestY(int x, int b, const vector<int>& v) {
    // Simple implementation of binary search
    int i = 0, j = v.size(), y = -1, m, y_curr;
    while (i < j) {
        m = (i + j) / 2;
        y_curr = v[m];
        if (x + y_curr <= b) {
            y = y_curr;
            i = m + 1;
        }
        else j = m;
    }
    return y;
}

int getMoneySpent(vector<int> keyboards, vector<int> drives, int b) {
    int max_spent = -1;
    sort(keyboards.begin(), keyboards.end());
    sort(drives.begin(), drives.end());
    // Use smaller vector for looping, larger vector for binary search
    vector<int> *v1, *v2;
    if (keyboards.size() < drives.size()) {
        v1 = &keyboards;
        v2 = &drives;
    }
    else {
        v1 = &drives;
        v2 = &keyboards;
    }

    int x, y;
    for (size_t i = 0; i < v1->size(); i++) {
        x = (*v1)[i];
        if (x < b) {
            y = findLargestY(x, b, *v2); // Use binary search
            if (y != -1) max_spent = max(max_spent, x + y);
        }
        else break;
    }
    return max_spent;
}

Next, let us consider the Climbing the Leaderboard problem (link). Imagine you are playing an arcade game and wish to track your rank on the leaderboard after each attempt. The leaderboard uses dense ranking, so players with the same scores will get the same rank. E.g., if the scores are 100, 90, 90, and 80, then the player scoring 100 has rank 1, the two players scoring 90 both have rank 2, and the player scoring 80 has rank 3. The leaderboard is represented as an array or list of integers (each player’s high score) in descending order. What makes the problem tricky is that, whenever a new score is added to the leaderboard, determining the resulting rank is non-trivial since this rank might be shared between multiple players. See the problem description page at the above link on HackerRank for an illustrated example.

Although the Electronics Shop and Climbing the Leaderboard problems have difficulty ratings of “easy” and “medium” on HackerRank, respectively, the latter problem is simpler in a way, since the leaderboard is already sorted. The example implementation below exploits this fact by running a binary search on the sorted leaderboard to get the rank after each new score:

int find_rank(int x, const vector<int>& v) {
    // Binary search for the rank of score x in v (unique scores in descending order)
    int i = 0, j = v.size(), m_pos, m_val;
    while (i < j) {
        m_pos = (i + j) / 2;
        m_val = v[m_pos];
        if (x == m_val) return m_pos + 1;   // Exact match: rank is index + 1
        else if (m_val > x) i = m_pos + 1;  // x ranks below this position, search right half
        else j = m_pos;                     // x ranks above this position, search left half
    }
    // x is not on the board: exactly i unique scores are strictly higher than x
    return i + 1;                           // Covers top (i = 0), bottom (i = v.size()) and middle ranks
}

vector<int> climbingLeaderboard(vector<int> ranked, vector<int> player) {
    // Derive vector v of unique values in the ranked vector
    vector<int> v;
    v.push_back(ranked[0]);
    for (size_t i = 1; i < ranked.size(); i++)
        if (ranked[i - 1] != ranked[i]) v.push_back(ranked[i]);
    // Binary search of rank in v for each score
    vector<int> res;
    for (auto x : player) res.push_back(find_rank(x, v));
    return res;
}

Resources for Further Reading

The problems discussed above give an initial taste of algorithmic thinking, but there are many other related topics worth studying in more depth. The aptly titled book Algorithmic Thinking: A Problem-Based Introduction by Daniel Zingaro is an excellent place to continue your journey (link). Zingaro has an engaging writing style and walks the reader through basic concepts like hash tables, recursion, dynamic programming, graph search, and more. The book also contains an appendix on Big O notation, which is a handy way of expressing and reasoning about the complexity of algorithms. Another book that covers several essential algorithms in a digestible manner is Grokking Algorithms by Aditya Bhargava (link). The book contains several useful illustrations and code snippets in Python, and is a great resource for brushing up on the basics of algorithmic thinking before technical interviews.

When it comes to dynamic programming, the series of YouTube videos (link to playlist) created by Andrey Grehov provides a great introduction. Dynamic programming is a powerful tool to have in your arsenal, and once you learn it, you will start seeing several opportunities to apply it in data science projects, e.g., to solve optimization problems (where some quantity like cost or revenue must be minimized or maximized, respectively) or combinatorics problems (where the focus is on counting something, essentially answering the question, “how many ways are there to do XYZ?”). Dynamic programming can be usefully applied to problems that exhibit the following two properties: (1) an optimal substructure, i.e., optimally solving a smaller piece of the problem helps solve the larger problem, and (2) overlapping sub-problems, i.e., a result calculated while solving one sub-problem can be reused (e.g., via memoization or caching) when solving another sub-problem, without needing to be recalculated.
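
As a small hedged illustration of these two properties (a textbook example, not tied to any specific HackerRank problem, and following the same preamble convention as the earlier snippets), the memoized function below counts the ways to climb n stairs taking 1 or 2 steps at a time: the answer for n is built from the answers for n-1 and n-2 (substructure), and each sub-result is cached so it is computed only once (overlapping sub-problems).

long long countWays(int n, vector<long long>& memo) {
    if (n <= 1) return 1;               // Base cases: one way to climb 0 or 1 stairs
    if (memo[n] != -1) return memo[n];  // Reuse a previously computed sub-result
    memo[n] = countWays(n - 1, memo) + countWays(n - 2, memo);
    return memo[n];
}

long long countWays(int n) {
    vector<long long> memo(n + 1, -1);  // -1 marks sub-problems not yet solved
    return countWays(n, memo);
}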

Finally, the doctoral dissertation Advanced Applications of Network Analysis in Marketing Science (link), published by the author of this article, discusses a range of practical data science use cases for applying graph theory concepts to fundamental problems in marketing and innovation management, such as identifying promising crowdsourced ideas for new product development, dynamic pricing, and predicting customer behavior with anonymized tracking data. The dissertation demonstrates how transforming tabular or unstructured data into a graph/network representation consisting of nodes (entities) and edges (relationships between entities) can unlock valuable insights and lead to the development of powerful predictive models across a wide range of data science problems.
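
As a minimal, hypothetical sketch of this idea (the data model and function name are invented for illustration, following the same preamble convention as the earlier snippets), the snippet below turns a flat table of customer-product purchase records into an adjacency-list representation of a graph, with customers as nodes and edges pointing to the products they bought:

// Hypothetical example: build an adjacency list (graph) from flat purchase records
map<string, vector<string>> buildPurchaseGraph(const vector<pair<string, string>>& records) {
    map<string, vector<string>> adj;       // customer -> list of purchased products
    for (const auto& r : records)
        adj[r.first].push_back(r.second);  // add an edge from customer to product
    return adj;
}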

In October 2023, Matt Welsh, an erstwhile computer science professor and engineering director at Google, gave an intriguing guest lecture at Harvard (YouTube link). His talk had a provocative title (Large Language Models and The End of Programming) and suggested that advances in generative AI, and large language models in particular, could dramatically change the way we develop software. While he noted that humans would likely still be needed in roles such as product management (to define what the software should do, and why) and software testing/QA (to ensure that the software works as intended), he argued that the act of translating a problem specification into production-ready code could largely be automated using AI in the not-too-distant future. By late 2023, AI-powered tools like GitHub Copilot were already showing the ability to auto-complete various basic types of code (e.g., test cases, simple loops and conditionals), suggesting the potential to improve the productivity of developers, if not remove the need for developers entirely. And since then, AI has continued to make impressive advances in delivering increasingly accurate, multimodal predictions.

In this context, given the subject of this article, it is worth considering to what extent algorithmic thinking will remain a relevant skill for data scientists in the age of AI-assisted software development. The short answer is that algorithmic thinking will likely be more important than ever before. The longer answer starts by acknowledging that, even today, it is possible in many cases to generate a draft version of an algorithm (such as the code snippets shown in the sections above) using generative AI tools like ChatGPT or GitHub Copilot. After all, such AI tools are trained by scraping the internet, and there is plenty of code on the internet; but that code is not necessarily high-quality code, potentially leading to "garbage in, garbage out". AI-generated code should therefore arguably always be thoroughly reviewed before being used in any data science project, which implies the continued need for human reviewers with relevant technical skills.

Furthermore, AI-generated code may need to be customized and/or optimized to fit a particular use case, and prompt engineering alone will likely not be enough. In fact, crafting a prompt that can reliably generate the required code (capturing the prompt engineer’s tacit know-how and motivation) often seems to be more verbose and time-consuming than writing the code directly in the target language, and neither approach obviates the need to properly frame the problem and devise a sensible plan for implementing the solution. Tasks such as framing, planning, customizing, optimizing and reviewing AI-generated code for individual use cases will arguably continue to require a decent level of algorithmic thinking, coupled with a deep understanding of the intent behind the code (i.e., the “why”). It seems unlikely in practice that such work will be substantially delegated to an AI “copilot” any time soon — not least due to the ethical and legal concerns involved; e.g., imagine letting the object avoidance software of a self-driving car system be generated by AI without sufficient human oversight.

Algorithmic thinking can help data scientists write code that is fast and makes sparing use of computational resources such as memory and storage. As more and more data scientists enter the field from diverse backgrounds, often without much prior exposure to algorithmic thinking, this article takes a step towards filling that knowledge gap. By providing a high-level introduction and hands-on examples of the kind that often appear in technical interviews, this article invites readers to take the next step and extend their study of algorithmic thinking with the resources suggested for further reading. Ultimately, algorithmic thinking is a vital skill for data scientists today, and it will remain a skill worth having in our AI-assisted future.
