Business Reporter

Data unchained: liberating data drives effective innovation


John Capello at Nasuni examines how innovation is being transformed by machine-generated data and the smart platforms that make it available

 

What makes successful innovation?

 

Broadly speaking, there are two approaches. According to the first, innovation flourishes when the best minds are concentrated in one place to solve intractable problems — in the manner of the scientists and engineers at the famed Bell Labs in New Jersey, or the mathematicians at Britain’s Bletchley Park code-breaking centre in World War II.

 

The second approach places particular emphasis on challenging teams’ assumptions, which, left unchallenged, can narrow creativity over time. In her ground-breaking book Closing the Innovation Gap, career innovator and former Cisco Systems CTO Judy Estrin calls for a greater variety of thinking and problem-solving perspectives to drive invention. She explains: “An overly inward focus can lead companies to make faulty assumptions in defining their products” and adds: “One way to broaden the outlook on a project is to encourage diversity of background and specialisation of project teams.”

 

A prime example of this dynamic problem-solving was Xerox PARC (the Palo Alto Research Center) in the US, famed for breakthroughs such as the personal computer and the mouse. What made Xerox’s set-up so successful was that its innovation teams came from widely differing backgrounds — engineering, science, design, the social sciences and so on — and tapped many different types of research data.

 

Diversity of thinking and data

There is an exciting parallel with this powerful diversity of thinking and multiplicity of data sources emerging in today’s data-driven economy. It comes from the ever-widening range and availability of data for commercial firms and research bodies to harness for inventing new products and services.

 

Whether it is the information captured by a single piece of complex equipment or a live stream from millions of sensors in a smart city, our connected economy provides more information sources than ever before to fuel faster, more effective innovation.

 

The connected economy also makes data available locally for expert analysis and innovation programmes, as never before. That is because today’s cloud-based storage platforms — which work with the three major cloud providers — deliver data flexibility and availability through effective global sharing and collaboration on massively scalable data sets across global regions.

 

Data has been liberated from the constraints of low-grade office IT and disparate corporate facilities that dogged companies’ efforts at data-driven research until very recently.

 

Freely available data means that innovation programmes no longer have to be tied to the computing power of a head office or large manufacturing site. Instead, information can be accessed and analysed from smaller sites or by specialists working remotely or at home. It is helping 21st-century organisations build their own Xerox PARC effect without having to build industrial-scale laboratories or set up entirely new sources of research data.

 

Three current cases show how the multiplicity and availability of data are changing the ability of companies, public bodies and third-sector institutions to innovate in their daily operations and research programmes.

 

Surprising innovations

First, hardware manufacturers have transformed technical problem-solving by cleverly marshalling their supply chain data. Defect data from manufacturing processes, once held in silos at different locations before it could be gathered and analysed, can now be captured as live streams, hosted in the cloud, and accessed for problem-solving within hours by in-house engineering teams located far from the production and supply chain units.

 

The limitations that geography once imposed on global companies’ fault-finding and innovation loops — extended supply chains, with multiple sites producing disparate and hard-to-transfer data sets — have been removed.

 

A similar unshackling is taking place in medicine. Following the imaging revolution, with scanning moving from 2D to 3D imaging, local healthcare bodies like community clinics, separated from well-resourced MRI and imaging centres, struggled to use this step-change to improve their diagnostic work and enhance patient care. In many cases, local clinics lacked imaging centres’ connectivity and storage capacity and had to buy more capacity to cope.

 

Many health services moved from on-premise infrastructures to flexible cloud storage as the ultimate home for their data.

 

However, with transformative edge connectivity and cloud storage, medical professionals can now share and work on the largest of images with colleagues, irrespective of their location or office size; they now “work in the smallest place but store their data in the biggest place.”

 

With larger files successfully managed and stored, local healthcare practitioners can analyse richer data that provides more insights, accelerate clinical decision-making, and provide services in new markets or geographies — without needing a big IT footprint or capital outlay.

 

The third, and most revelatory, example is modern data’s flexibility in academic research. Medical research institutions’ specialised microscopes routinely generate 100-terabyte scans of the human body, piling up one petabyte of data per microscope per month. With some institutions running thousands of research microscopes, many have struggled to move such data volumes around for research insights because they are tied to limited data-transfer technologies.

 

Such arrangements also constrain researchers’ ability to search and access files or file sets, crucial daily processes for delivering innovation in medical research.

 

With the adoption of cloud-based storage platforms by research bodies, researchers can now access these data sets from any location using the elasticity of the cloud — even though the data is still ingested from on-premise IT infrastructures tied to big hospitals and institutions.

 

Medical institutions not only have far better access to data at scale but are increasingly sharing it with their counterparts to build academic and publicly-available data lakes to transform their research capabilities.

 

Wider scope for innovation

Before the cloud age, organisations needed to bring the person to the data, locked in large on-premise computing set-ups at head offices or main production locations. Advances in connectivity and cloud-based object storage have enabled data strategy to detach itself from the location of compute resources.

 

The result is more effective data analysis at a bigger scale and, with it, greater scope for innovation than ever before.

 

Today’s companies bring the data to the person, enabling engineers and data scientists to work on the data wherever they are located, analyse bigger and more multi-faceted data sets, and identify innovations more effectively — powering a future of unprecedented potential for new invention.

 


 

John Capello is Field CTO at Nasuni

 

Main image courtesy of iStockPhoto.com


© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543
