How machine learning helps oil the wheels of production

Production line fault detection has often thrown a spanner into the works – but innovations in machine learning are improving it at pace, writes Zita Goldman

One of the new fads in tech discourse is “combinatorial innovation”. According to this idea, when an emerging technology has a rich set of components, these sub-technologies keep cross-pollinating and combining into new products as innovators work through all the possibilities.

Recent examples of combinatorial innovation include combining edge computing with 5G, cryptocurrencies with green energy, and artificial intelligence with IoT to create AIoT.

One area that has recently grown in significance as a result of combinatorial innovation is machine vision. It has long been a key component of IoT and its industrial strand, IIoT, but its usefulness plateaued until advanced machine-learning technologies, especially deep learning, emerged.

To give another example, deep neural networks (DNNs) couldn’t have come into their own without graphics processing units (GPUs), similar to the one your computer uses to process images and video. However, their capacity for parallel processing gives them far more potential than just providing the graphics for your mobile phone, PC or Xbox – they have now become essential to the performance of deep learning systems too.

But where is the parallel-processing excellence of GPUs put to good use in machine learning? In the case of “convolutional neural networks” (CNNs) – the type of deep neural network ideal for analysing visual imagery – using GPUs can produce four times as much processing power as hardware without a GPU would.

CNNs are simply a type of neural network that employs an operation called a “convolution”. When used to analyse an image, the CNN first extracts the key features of the picture (in a picture of a cat, the face, legs or ears, for example). Then, in the classification stage, it establishes the probability that each feature is what the algorithm predicts it to be. In the case of a cat’s mouth, for example, there might be a slight probability that it is a dog’s mouth, along with some chance, albeit tiny, that it is a hat or a mug. The system can then make a decision based on these probabilities, assigning the image a label such as a name (“cat”) or even a quality (“faulty”, “ripe” or “too large”).
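The two stages described above – extracting features with a convolution, then turning scores into probabilities – can be sketched in a few lines of Python. This is a toy illustration, not a real CNN: the 4x4 “image”, the 2x2 edge-detector kernel and the class scores are invented values chosen purely to show the mechanics.

```python
import math

def convolve2d(image, kernel):
    """Slide a small kernel over the image to produce a feature map.
    This is the 'convolution' step that picks out local features
    such as edges."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    feature_map = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            total = sum(image[i + di][j + dj] * kernel[di][dj]
                        for di in range(kh) for dj in range(kw))
            row.append(total)
        feature_map.append(row)
    return feature_map

def softmax(scores):
    """Turn raw class scores into the probabilities the article
    describes: one label very likely, the others slight or tiny."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# A 4x4 toy 'image' with a vertical edge down the middle.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]

# A vertical-edge detector kernel (made-up weights).
kernel = [[-1, 1],
          [-1, 1]]

fmap = convolve2d(image, kernel)   # responds strongly where the edge is

# Pretend the network mapped its features to scores for cat / dog / mug.
probs = softmax([3.0, 1.0, 0.1])   # 'cat' dominates, 'mug' is tiny
```

In a real CNN many such kernels are learned from data rather than hand-written, and several convolution layers are stacked before the final probability step, but the two-stage structure is the same.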

How do CNNs take machine vision and quality inspection to the next level?

Fault detection by machines is a rather complex affair. The algorithm has to recognise edges, chinks, burrs, broken seams and all sorts of unpredictable anomalies in manufactured products. Programming algorithms by hand to recognise every possible type of error involves analysing hundreds of thousands of individual images.

DNNs, meanwhile, can learn error features by themselves and – based on those – define each problem class accurately. Self-learning algorithms’ error margins can also get very close to 0 per cent, whereas manually programmed systems average around 10 per cent. Moreover, the higher accuracy of the new algorithms is further enhanced by technological advances in industrial image capturing, such as stereo cameras.

Use-cases of CNN-enabled computer vision abound outside defect inspection too. Automated lane detection and sign reading in cars, identifying diseases in healthcare, automated damage analysis of assets and crops in the insurance industry are all solutions that we already encounter in our daily lives. Motion tracking and object navigation are also of key importance in factory settings, where robots and humans increasingly work together in an environment that requires exceptionally strict health and safety monitoring.

But computer vision is still far from maturity, and manufacturers can’t pull plug-and-play solutions off the shelf just yet. Pre-trained networks, however, which already know the basics of identifying image features, can give a major boost to adoption. With pre-trained solutions, customising CNNs to specific factory applications typically takes a couple of hundred – rather than tens of thousands of – images from each error class, and only a basic level of in-house machine-learning expertise.
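A rough sketch of why pre-training cuts the data requirement so sharply: the generic feature extractor stays frozen, and only a small classification “head” is retrained on the factory’s own examples. The Python below is a deliberately simplified stand-in – the hand-written feature function, the four labelled samples and the learning rate are all invented for illustration, whereas a real system would fine-tune a deep network with a library such as PyTorch or TensorFlow.

```python
# A hypothetical 'pre-trained' feature extractor. In a real system this
# would be a CNN trained on millions of generic images; here it is a
# frozen stand-in mapping a raw sample to two feature values.
def pretrained_features(x):
    return [x[0] + x[1], x[0] - x[1]]  # never retrained

def train_head(samples, labels, epochs=200, lr=0.1):
    """Fine-tuning: fit only a small linear head, via gradient descent
    on squared error, using a handful of labelled factory examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = pretrained_features(x)
            pred = w[0] * f[0] + w[1] * f[1] + b
            err = pred - y
            w[0] -= lr * err * f[0]
            w[1] -= lr * err * f[1]
            b -= lr * err
    return w, b

# A handful of labelled parts: 1.0 = faulty, 0.0 = good.
samples = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.9)]
labels = [1.0, 1.0, 0.0, 0.0]
w, b = train_head(samples, labels)

def predict(x):
    """Score a new part: above 0.5 suggests 'faulty'."""
    f = pretrained_features(x)
    return w[0] * f[0] + w[1] * f[1] + b
```

Because the frozen extractor already does the hard perceptual work, the trainable part is tiny, which is why a couple of hundred images per error class can suffice where training from scratch would need tens of thousands.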

Since the proliferation of sensors in corporate, industrial and urban environments started, there have been debates about whether the terabytes of data generated this way should be captured at all without a clear understanding of what this tsunami of information can be used for. The latest technological developments in computer vision and machine learning can demonstrate how new combinations of revolutionary digital technologies can give purpose to what only yesterday looked like pointless data hoarding.  


The examples illustrating the workings of Convolutional Neural Networks were taken from FreeCodeCamp, a non-profit website offering free training and self-learning opportunities in coding.

© Business Reporter 2020