
Data leaders’ challenge: The growing data skills shortage

One of the major challenges faced by data leaders today is the daunting task of collecting and transforming raw data into analytics-ready data.

Moreover, the increasing volume and variety of data, along with the need to manage both cloud-based and on-premises data, make this an increasingly complex and expensive undertaking. Additionally, data leaders must do more with less, as budgets are not growing as fast as business requirements or data volumes.

Of course, all of this is exacerbated by a shortage of skilled knowledge workers and data professionals.

The pressure to make data-driven decisions has never been stronger, and organisations must find ways to optimise their data management processes to keep up with the rapidly evolving landscape.

Harvard Business School (HBS) supports the view that businesses have long relied on professionals with data science and analytical skills to understand and leverage the information at their disposal. It confirms that this need has accelerated with the proliferation of data driven by smart devices and other technological advances.

HBS highlights that it is impossible to single out one data science skill as the most important for business professionals, but notes one point all data leaders will endorse: insights are only as good as the data that informs them.

It says this makes it vital for organisations to employ individuals who understand what clean data looks like and how to shape raw data into usable forms – a process HBS calls data wrangling.

Also known as data cleaning, data remediation, or data munging, data wrangling refers to a variety of processes designed to transform raw data into more readily used formats.

The exact methods differ from project to project, depending on the data being leveraged and the goal it is expected to achieve. Examples include the following (a short illustrative sketch follows the list):

  • Merging multiple data sources into a single dataset for analysis.
  • Identifying gaps in data (for example, empty cells in a spreadsheet) and either filling or deleting them.
  • Deleting data that’s either unnecessary or irrelevant to the project.
  • Identifying extreme outliers in data and either explaining the discrepancies or removing them so that analysis can take place.
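
To make these steps concrete, here is a minimal sketch in Python using pandas. It assumes two hypothetical exports (sales.csv and crm.csv) with illustrative column names; real projects will differ in sources, rules and thresholds.

    import pandas as pd

    # Merge multiple data sources into a single dataset (hypothetical files and columns).
    sales = pd.read_csv("sales.csv")   # e.g. order_id, customer_id, amount
    crm = pd.read_csv("crm.csv")       # e.g. customer_id, region, segment
    df = sales.merge(crm, on="customer_id", how="left")

    # Identify gaps and either fill or delete them.
    df["region"] = df["region"].fillna("unknown")   # fill missing categories
    df = df.dropna(subset=["amount"])               # drop rows missing the key measure

    # Delete data that is unnecessary or irrelevant to the project.
    df = df.drop(columns=["segment"])

    # Identify extreme outliers and remove them so analysis can take place
    # (here, anything more than three standard deviations from the mean).
    mean, std = df["amount"].mean(), df["amount"].std()
    df = df[(df["amount"] - mean).abs() <= 3 * std]

    print(df.describe())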

This is where automation takes centre stage. Utilising out-of-the-box thinking and strategies can help to address these hurdles.

One approach is to embrace automation and apply it to areas such as data entry, validation, data mart creation (a data mart being a smaller version of a data warehouse intended for a particular department or group within a company) and analysis, freeing up resources for other critical tasks.
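
As an illustration of what such automation can look like, the sketch below refreshes a department-level data mart from a central warehouse table. It assumes a SQLite warehouse with a hypothetical fact_sales table; the table, column and file names are placeholders, and a production pipeline would use the organisation's own warehouse and orchestration tooling.

    import sqlite3
    import pandas as pd

    def refresh_finance_mart(warehouse_path: str = "warehouse.db") -> None:
        """Rebuild a small, finance-focused data mart from the central warehouse."""
        with sqlite3.connect(warehouse_path) as conn:
            # Pull only the rows and columns the finance team needs (hypothetical schema).
            mart = pd.read_sql_query(
                """
                SELECT order_date, region, SUM(amount) AS revenue
                FROM fact_sales
                WHERE order_date >= date('now', '-90 day')
                GROUP BY order_date, region
                """,
                conn,
            )
            # Basic automated validation before publishing the mart.
            if mart["revenue"].isna().any():
                raise ValueError("revenue should never be null")
            # Overwrite the mart table; a scheduler (cron, Airflow, etc.) could call this nightly.
            mart.to_sql("mart_finance_revenue", conn, if_exists="replace", index=False)

    if __name__ == "__main__":
        refresh_finance_mart()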

The time and resulting cost involved in finding, moving, cleaning and preparing data for analytics can vary depending on factors such as the complexity of the data, the number of data sources, the quality of the data, the tools and technologies used, and the skill set of the people involved in the process.

Studies have shown that data preparation can account for up to 80% of the time and cost involved in data analytics projects. This highlights the importance of having efficient and effective data preparation processes in place to reduce the overall time and cost of data analytics projects.

There are various studies providing insights into the share of time and cost that data preparation consumes. The International Data Corporation (IDC) reports that the 80/20 rule is alive and well in data management, confirming that the breakdown of time spent on data preparation versus data analytics is woefully lopsided.

It reveals that less than 20% of time is spent analysing data, while 82% of the time is spent collectively on searching for, preparing and governing the appropriate data.

This is echoed by other studies, which highlight that most data analysts spend only 20% of their time on actual analysis and 80% on tasks of little business benefit, such as finding, cleaning and modelling data – work that is highly inefficient and adds little value to the business.

Artificial intelligence (AI) and automation promise to radically change this paradigm.

Fortunately, the next generation of back-end data pipeline technologies is making huge strides in reducing the manual effort it used to take to make data analytics-ready, using AI and machine learning to create augmented cloud data warehouses or lakes. This reduces the need for large data teams and their associated costs, while delivering meaningful outcomes in record time without a huge wage or consulting bill.

In addition to adopting new technologies, organisations can encourage employees to make use of self-service analytics by providing them with the tools and resources necessary to access and analyse data.

For example, AI-driven insights and natural language processing can make analysis much more accessible for the average business user. In fact, data literacy programmes are empowering business users to take control of their data.

This in turn reduces the load on data teams’ skilled resources and frees up capacity to deliver the data pipeline to the business. The approach is showing good results, as business users get their reports faster and can act on what the data tells them.

Ensuring data quality (DQ) is also critical to the success of data-driven decision-making. Many analytical and data science projects have been hindered, or completely scuppered, by poor data quality.

New generations of DQ tools and processes are using AI and machine learning to ensure data is properly cleaned, validated and standardised in a much shorter timeframe and with much less manpower than previously required.
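
By way of illustration, the sketch below shows the kinds of rule-based checks and standardisation steps such tools automate. The column names and thresholds are hypothetical, and the AI- and ML-assisted products described above go considerably further, for example by learning validation rules from the data itself.

    import pandas as pd

    def run_quality_checks(df: pd.DataFrame) -> dict:
        """Return a simple data quality report (completeness, uniqueness, validity)."""
        return {
            # Completeness: share of missing values per column.
            "missing_ratio": df.isna().mean().to_dict(),
            # Uniqueness: duplicate records that would skew aggregates.
            "duplicate_rows": int(df.duplicated().sum()),
            # Validity: values outside an expected business range.
            "negative_amounts": int((df["amount"] < 0).sum()),
        }

    def standardise(df: pd.DataFrame) -> pd.DataFrame:
        """Apply basic standardisation so downstream analysis sees consistent values."""
        df = df.copy()
        df["region"] = df["region"].str.strip().str.title()
        df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
        return df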

Guaranteeing DQ is no longer as mundane as it used to be, as some new technologies have been built from the ground up with AI at their heart. Organisations that have taken the leap to invest in this technology are seeing greater data accuracy and trustworthiness.

Finally, collaboration with external partners can help organisations access additional data sources and expertise, reducing the workload on internal teams, while enabling them to make informed decisions based on more comprehensive data.
