People have always committed fraud and always will – and some could be your employees. However, the tools have changed, particularly in the past decade, and technology has multiplied the opportunities, resulting in an exponential increase in cyber-crime.
On the flip side, fraud investigators now have far more sophisticated IT tools with which to prevent and detect it. So who will win the ongoing fight against fraud?
Why recruitment processes can’t eliminate employee fraud
Any robust fraud prevention strategy must encompass both the human and technical sides of fraud. Studies suggest that people fall into three equal groups when it comes to committing fraud. A third are completely honest and will not perpetrate fraud whatever the circumstances. Another third will be open to committing fraud depending on the circumstances. The final third comprises the “rotten apples” who will always seek to defraud their employer.
Sifting out the rotten apples
Organisations should always seek to employ the first third and not the last third – but clearly it’s not quite as simple as that. A well-designed recruitment screening process can remove rotten apples by confirming key dates for a candidate’s employment and education history, seeking proof of qualifications, and taking up verbal references (which may reveal more than a formal process). Explanations for gaps in work or education history should be sought, and of course criminal record checks carried out where appropriate or allowable.
None of this will identify the middle third, who by definition appear to be model employees until certain circumstances occur. According to the fraud triangle theory, these circumstances arise when three elements converge: pressure, opportunity and rationalisation.
Most of us believe we would never commit fraud but there may be a certain point in someone’s life, or a set of exceptional circumstances, where that may change. Triggers for pressure could include an expensive divorce, serious illness in the family, or a costly habit or addiction. The resultant pressure could arise years after the employee is hired, so the recruitment screening process will never identify it.
The higher an employee rises through the corporate ranks, the greater their opportunities tend to be and the more danger they represent. They have won colleagues’ trust (“I can’t believe he would have done that – we worked together for 15 years. I even went on holiday with him!”). They can commit the organisation to major transactions and know the controls in place – and the weaknesses in those controls. All this creates an opportunity that, combined with the right pressure, can lead someone from the middle third to commit a fraud.
This example shows why technology is an essential complement to human anti-fraud activities. Here, a technical solution is required to catch any signs of fraudulent behaviour or red flags that weren’t picked up by the recruitment screening process.
The power of artificial intelligence to detect fraud
Artificial intelligence, or AI – defined as “computers acting in ways that seem intelligent” – has been in commercial use since the 1970s in the form of rules-based expert systems. But until recently, the application of AI was constrained by computational power and data availability. Those constraints are now gone.
Modern, cloud-enabled AI can operate in real time or close to it. It can concurrently analyse transactional data alongside, for example, chat room or email communications and customer relationship management systems. With algorithms tuned for organisational specifics, more fraud can be detected faster, and with less human involvement.
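The idea of analysing transactional data alongside communications can be sketched in code. The record types, rule and threshold below are hypothetical simplifications, not a real detection system: the point is only to show how correlating two data sources can surface a red flag that neither source reveals on its own.

```python
from dataclasses import dataclass

# Hypothetical, simplified records; a real system would draw these from
# transactional databases and communication archives.
@dataclass
class Transaction:
    employee: str
    amount: float
    approved_by: str

@dataclass
class Message:
    sender: str
    recipient: str

def flag_self_dealing(transactions, messages, threshold=10_000.0):
    """Flag large transactions where the employee and the approver are in
    direct contact - a toy rule, illustrative only."""
    contacts = {(m.sender, m.recipient) for m in messages}
    flags = []
    for t in transactions:
        in_contact = ((t.employee, t.approved_by) in contacts
                      or (t.approved_by, t.employee) in contacts)
        if t.amount >= threshold and in_contact:
            flags.append(t)
    return flags

txns = [Transaction("alice", 25_000, "bob"), Transaction("carol", 500, "dave")]
msgs = [Message("bob", "alice")]
print([t.employee for t in flag_self_dealing(txns, msgs)])  # ['alice']
```

In practice such rules would be tuned to the organisation's own approval limits and communication patterns, as the paragraph above describes.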
But the aim isn’t to engineer humans out of the equation: it’s to create combined networks of humans and computers “acting in ways that seem [even more] intelligent.” The objective of identifying more fraud must be balanced against the number of machine-identified false positives requiring human resolution. With sufficiently large data sets, various permutations of machine learning become available and relevant, and promise to make analyses ever more effective.
Advanced AI solutions take advantage of both database and textual information from internal and external sources. Freely accessible sources, such as the UK’s Companies House, can verify against reference data. A technique used as part of the FTI Augmented Investigations® capability automatically identifies and acquires data about entities mentioned in, for example, adverse media reports. By iteratively acquiring information on companies and their directors, and then those directors’ other company affiliations, and so on, a network of related parties is created, and can then be linked to internal information.
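The iterative acquisition described above amounts to a breadth-first expansion: company to directors, directors to their other companies, and so on. The sketch below uses stubbed in-memory lookups standing in for a registry such as Companies House; the data, names and function signatures are illustrative assumptions, not a real API.

```python
from collections import deque

# Stubbed registry lookups (illustrative data, not real records).
DIRECTORS = {"Acme Ltd": ["J Smith"], "Beta Ltd": ["J Smith", "K Jones"]}
APPOINTMENTS = {"J Smith": ["Acme Ltd", "Beta Ltd"], "K Jones": ["Beta Ltd"]}

def related_parties(seed_company, max_hops=3):
    """Breadth-first expansion from a seed company: gather its directors,
    then those directors' other companies, and so on, returning
    (company, director) edges of the related-party network."""
    edges, seen = set(), {seed_company}
    queue = deque([(seed_company, 0)])
    while queue:
        company, depth = queue.popleft()
        if depth >= max_hops:
            continue
        for director in DIRECTORS.get(company, []):
            edges.add((company, director))
            for other in APPOINTMENTS.get(director, []):
                edges.add((other, director))
                if other not in seen:
                    seen.add(other)
                    queue.append((other, depth + 1))
    return edges

print(sorted(related_parties("Acme Ltd")))
```

The resulting edge set can then be joined against internal data (suppliers, payees, employees) to surface undisclosed relationships.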
Graph databases, made famous by the Panama Papers data breach, reveal relationships between entities. As well as presenting data visually for human review, these can mathematically identify key participants within a large group, given a sufficiently comprehensive dataset.
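One simple way such "key participants" can be identified mathematically is degree centrality: counting each node's connections. This is a minimal hand-rolled sketch on a toy edge list (graph platforms offer far richer centrality measures).

```python
from collections import Counter

def degree_centrality(edges):
    """Count connections per node; in a related-party graph the
    highest-degree nodes are candidate key participants."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg.most_common()

edges = [("Acme Ltd", "J Smith"), ("Beta Ltd", "J Smith"),
         ("Gamma Ltd", "J Smith"), ("Beta Ltd", "K Jones")]
print(degree_centrality(edges)[0])  # ('J Smith', 3)
```

Here the director appearing across the most companies ranks first, which is exactly the kind of signal a human reviewer would then investigate.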
Focusing narrowly on specific problems allows for more efficient identification, but may miss variants, leading to false negatives. Conversely, tools that aspire to identify fraud more broadly will create more false positives and require more human engagement to resolve alerts – initially, at least.
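The trade-off described above is usually quantified as precision (how many alerts are real fraud) versus recall (how much fraud is caught). The figures below are invented for illustration only.

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision: share of alerts that are genuine.
    Recall: share of genuine fraud that is caught."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Narrow detector: few false alerts, but variants slip through.
print(precision_recall(8, 2, 12))   # (0.8, 0.4)
# Broad detector: catches more fraud but floods analysts with alerts.
print(precision_recall(18, 42, 2))  # (0.3, 0.9)
```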
It’s easier to use technology and data to prevent fraud than to catch it. Employee onboarding tools should make the most of available reference data and well-designed risk measures.
Like a clock’s pendulum, the fight against fraud swings between technology on one side and human analysis of data and behavioural patterns on the other. It is this movement between the two that makes investigation teams function as they should, with each element complementing and enhancing the other.