Data visibility facilitates better, faster decisions to support your digital transformation strategy and goals
We live in a world inundated with data. For organisations with even a modest IT estate, the volume of data gathered from their systems, processes, customers and products has grown exponentially in recent years. Historically, IT data has been siloed because of separate, disparate systems and data generation and collection mechanisms. As a result, the relationships and interdependencies of data within these segregated environments have been opaque at best and hidden entirely at worst.
With the growth of the public cloud and its associated services, the ability to break down silo walls and intermix datasets is greater than ever, and organisations are embracing these opportunities. However, with new operating models comes an even greater influx of data generated by new systems and services.
While it may have been true years ago that IT data was not rich or robust enough to fully support the needs of stakeholders across an organisation, such as CFOs, CIOs, business unit leads and product leads, that’s no longer the case. We’re in a new era, in which these stakeholders have all the data they need to make better business decisions – and to make them faster. We can now extract meaningful, insightful data where doing so was previously impossible, or at least extremely difficult: complete asset discovery across hybrid environments and full visibility of all resources across the entire IT estate are now viable.
Curate, calibrate, and enable
The challenge now is to curate and calibrate this huge volume of data to enable IT leaders to be more agile and make better decisions faster. The ability to collect data on all aspects of an IT estate also means that we must now address important questions. How much data is too much? The answer is debatable. Is all this data relevant? Probably not.
In data analytics, the quantity of data matters, since larger samples are needed for the law of large numbers to kick in and make findings statistically reliable, but the quality of the data is paramount. As the Flexera 2021 State of Tech Spend Report shows, the top challenge IT professionals cited with regard to making forward-looking decisions was a lack of quality data. Note that they did not say there wasn’t enough data, but rather that they lacked data in which they had confidence.
Better data supports better, faster decisions
As the saying goes, “you can’t manage what you can’t measure.” With the tools and services available for data collection in modern IT systems, it’s probably safe to say that we’ve figured out plenty of ways to measure adequately. The question is now shifting from data collection to data validation and data relevance.
History shows us that it’s possible to make bad decisions with good data. The reverse is also true: it’s extremely difficult to make good decisions with bad data. For example, without the ability to normalise and “dedupe” a dataset, that is, to eliminate duplicate copies of the same records, it may appear that more data points exist to support a decision than is actually the case. The result: strategic go-forward decisions made on skewed or inflated figures.
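To make the deduplication point concrete, here is a minimal Python sketch. The hostnames, operating systems and normalisation rules are invented for illustration, not drawn from any particular discovery or TVO tool; the point is simply that raw record counts overstate the estate until records are normalised and deduplicated.

```python
# Minimal illustrative sketch: normalising and deduplicating a small,
# hypothetical asset inventory so duplicate records don't inflate counts.
from collections import Counter

# Raw records as they might arrive from two separate discovery tools.
# All field names and values here are invented for illustration.
raw_assets = [
    {"hostname": "WEB-01.corp.local", "os": "Windows Server 2016"},
    {"hostname": "web-01", "os": "windows server 2016"},
    {"hostname": "DB-02.corp.local", "os": "Ubuntu 20.04"},
    {"hostname": "db-02", "os": "ubuntu 20.04 LTS"},
]

def normalise(record):
    """Lower-case the hostname, strip the domain suffix and tidy the OS string."""
    host = record["hostname"].lower().split(".")[0]
    os_name = record["os"].lower().replace(" lts", "")
    return (host, os_name)

# Deduplicate on the normalised key: two tools reporting the same machine
# should count as one asset, not two.
unique_assets = {normalise(r) for r in raw_assets}

print(f"Raw records:   {len(raw_assets)}")     # 4 - inflated
print(f"Unique assets: {len(unique_assets)}")  # 2 - the real estate size

# Counting operating systems over the deduplicated set avoids skewed results.
print(Counter(os_name for _, os_name in unique_assets))
```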
As more and more workloads are evaluated for deployment to the public cloud, this need for clean, accurate data becomes even more critical. An organisation needs to have more than visibility into what IT assets it has. It must have clarity about how those assets interact (or don’t interact), which are nearing end-of-support (EOS) or end-of-life (EOL), which would be out of compliance if they were moved or migrated to a different environment, and so on.
Having an exhaustive list of the “what” in an IT estate is important, but without the cleansing, normalising, deduping, and enriching of that data, good decisions will be hard to unearth from the mass of collected data. And without the enrichment and contextualisation of your data, not only will the ability to make good decisions be hampered, but the speed at which these decisions can be made will be throttled as well.
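In the same illustrative spirit, the sketch below shows what enriching deduplicated asset records with EOL/EOS context might look like. The END_OF_SUPPORT lookup table, its dates and the 12-month risk flag are hypothetical stand-ins for the vendor lifecycle feeds that a TVO or IT asset management tool would actually supply.

```python
# Minimal illustrative sketch: enriching normalised asset records with
# end-of-support context from a hypothetical, hand-maintained lookup table.
from datetime import date

# Hypothetical end-of-support dates; real data would come from a vendor
# lifecycle feed or a TVO/ITAM tool, not a hard-coded dictionary.
END_OF_SUPPORT = {
    "windows server 2016": date(2027, 1, 12),
    "ubuntu 20.04": date(2025, 5, 31),
}

assets = [
    {"hostname": "web-01", "os": "windows server 2016"},
    {"hostname": "db-02", "os": "ubuntu 20.04"},
]

def enrich(asset, today=None):
    """Attach an EOS date and a simple migration-risk flag to an asset record."""
    today = today or date.today()
    eos = END_OF_SUPPORT.get(asset["os"])
    return {
        **asset,
        "eos_date": eos,
        "eos_within_12_months": bool(eos) and (eos - today).days <= 365,
    }

for record in (enrich(a) for a in assets):
    print(record)
```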
When it comes to data, more is not always better. More is good. But better is better. And better leads to faster.
Technology value optimisation tools
So how do we prevent the paralysis by analysis that can come from measuring everything and being overwhelmed by the resulting quantity of data? The good news is that as the collection and storage tools for these vast datasets have evolved, so have the tools for inserting sanity into the chaos. Technology value optimisation (TVO) tools, in particular, help organisations visualise their entire IT estate and make data-driven decisions that are aligned with organisational goals.
Select tools and techniques from the marketplace (from both cloud providers and third-party vendors) that allow you to sift through the mounds of data, extract the unique and relevant data points, and enrich those data points with additional context, such as compliance information or EOL/EOS data, so that you can make good decisions with good data, quickly and with agility. Normalising data from multiple sources helps you make data-driven decisions that align IT management with improved business performance.
For greater insights into your own hybrid IT environment, from on-premises to SaaS to cloud, visit IT Visibility | Flexera
by Brian Adler, Sr. Director Cloud Market Strategy, Flexera