Optimize Fraud Detection When Administering Government Grants

Use Cases & Projects, Dataiku Product, Scaling AI | Colleen Chen

Since the Spanish flu, countries around the globe have established government organizations and departments of health to implement public health mandates and to coordinate and manage national health responses during times of crisis (e.g., the global health crisis that began in 2020, or natural disasters such as droughts, bushfires, and floods). While the ultimate goal is to provide economic support to impacted communities, disaster payment measures are often introduced hastily, given the urgency of these events, and with inadequate fraud controls.


Now, to be clear, some fraudulent activities will take months or years to be revealed. But there are actionable steps that legal, policy, risk, and data professionals in government can take to build transparent, fair, and explainable models using Dataiku’s code-free tools and to ensure robust, automated government decision making. The full project (a use case based on the Australian government, in which a machine learning-based grant fraud detection model was built in just weeks) is walked through in this step-by-step guidebook, but we’ve condensed the key steps here to keep in mind if you replicate this project on your own:

1. Define the Goal

In order to articulate ROI for the project, its impact must first be understood. For starters, the World Bank shows that the international benchmark for fraud in government social security programs is between 2% and 5%. Looking at the Australian government, for example, with a $17.6 billion economic stimulus program, this translates to an amount between $352 million and $880 million, which is not small potatoes. In fact, it surpasses the Australian government’s four-year total direct investment in AI and the total spent on the new National Recovery and Resilience Agency. It’s a proof point that even a modest fraud reduction is worth pursuing.
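As a quick sanity check on those figures, here is a minimal sketch of the arithmetic in Python, using the program size and World Bank benchmark quoted above (the numbers themselves are the only inputs; nothing here is Dataiku-specific):

```python
# Estimate potential fraud exposure for a stimulus program, using the
# World Bank benchmark of 2-5% fraud in government social security programs.
program_size = 17.6e9                          # AUD, economic stimulus program
benchmark_low, benchmark_high = 0.02, 0.05     # 2% to 5% international benchmark

low_exposure = program_size * benchmark_low    # ~$352 million
high_exposure = program_size * benchmark_high  # ~$880 million

print(f"Estimated fraud exposure: ${low_exposure/1e6:.0f}M to ${high_exposure/1e6:.0f}M")
```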

Further, the issue must be solvable with data and the organization needs to have access to the right data. When determining whether this unique program was in fact solvable with data, I referred to the common characteristics of fraud associated with pandemics identified by the Australian Institute of Criminology and selected three prevention and detection techniques from the Commonwealth Fraud Prevention Centre that were suited to a data-driven solution:

  • Adopting clear and specific eligibility requirements
  • Verifying claim eligibility by cross-referencing internal or external sources 
  • Applying fraud detection software programs and processes

2. Get the Data 

When it comes to anomaly detection, the more data the better. Using multiple types and sources of data is what allows a project to move beyond point anomalies and into identifying more sophisticated contextual or collective anomalies. Since I didn’t have access to all the relevant government data, I collected three datasets from data.gov.au containing information about businesses held by government agencies, such as the taxation office and the corporate regulator, and then added a few datasets containing the structured and unstructured data typically present in grant applications from Kaggle and the Snowflake COVID-19 Data Share.
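In Dataiku this joining happens through visual recipes, but the same step can be sketched in a few lines of pandas. The file names and the join key ("abn") below are placeholders for illustration, not the actual datasets:

```python
import pandas as pd

# Placeholder file names: business registry extracts from data.gov.au plus
# grant-application data from Kaggle / the Snowflake COVID-19 Data Share.
businesses = pd.read_csv("business_register.csv")    # e.g., corporate regulator extract
tax_status = pd.read_csv("tax_status.csv")           # e.g., taxation office extract
applications = pd.read_csv("grant_applications.csv")

# Join everything on a common business identifier (assumed column name "abn")
# to produce one dataset for downstream anomaly detection and modeling.
merged = (
    applications
    .merge(businesses, on="abn", how="left")
    .merge(tax_status, on="abn", how="left")
)
print(merged.shape)
```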

3. Clean the Data

Next, I needed to understand the data (which came from a myriad of sources) and make sure it was suitable to be used and ready to be merged into one dataset on which the ultimate fraud prediction model would be built. I checked the data for quality, consistency, and relevance, and also reviewed the data distribution, shape, and skewness to check whether a roughly normal statistical distribution could be assumed when building predictive models downstream.
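Those checks were done with Dataiku’s built-in statistics, but a minimal equivalent in pandas might look like the sketch below (the file name and merged dataset are assumptions carried over from the previous step):

```python
import pandas as pd

df = pd.read_csv("merged_grant_data.csv")  # placeholder for the merged dataset

# Basic quality checks: missing values and duplicate rows.
print(df.isna().mean().sort_values(ascending=False).head(10))
print("duplicate rows:", df.duplicated().sum())

# Distribution, shape, and skewness of numeric features; strongly skewed
# columns may need a transform before a roughly normal distribution is assumed.
numeric = df.select_dtypes("number")
print(numeric.describe())
print(numeric.skew().sort_values(ascending=False))
```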

4. Build a Model

To build my model, I drew on three countermeasures published by the Commonwealth Fraud Prevention Centre. Countermeasures are strategies for preventing or limiting the size of the fraud risk by reducing the likelihood and consequences of fraud. While the most relevant countermeasures will vary depending on the situation, I focused on prevention countermeasures since they are the most common and cost-effective way to stop fraud:

  • Fraud Countermeasure 1: Clear and Specific Eligibility Requirements (i.e., ability to update and rebuild the model quickly without worrying about dependencies)
  • Fraud Countermeasure 2: Cross-Referencing Internal or External Sources (i.e., verifying claims through the application of natural language processing and graph theory)
  • Fraud Countermeasure 3: Modeling and Detecting Deliberate Fraud Activities (i.e., predicting and flagging high-risk applications during the grant application review process; a minimal sketch of this step follows the list)
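The modeling itself was built with Dataiku’s visual ML, but as a rough, hypothetical illustration of countermeasure 3, here is what a scikit-learn sketch could look like. The file name, the "is_fraud" label, the numeric-only feature selection, and the 0.8 review threshold are all assumptions for illustration:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder dataset: features describing each application plus a historical
# "is_fraud" label from previously investigated grants.
df = pd.read_csv("merged_grant_data.csv")
X = df.drop(columns=["is_fraud"]).select_dtypes("number")
y = df["is_fraud"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Flag applications whose predicted fraud probability exceeds a review threshold.
risk_scores = model.predict_proba(X_test)[:, 1]
flagged = X_test[risk_scores > 0.8]
print(classification_report(y_test, model.predict(X_test)))
print(f"{len(flagged)} applications flagged for manual review")
```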

5. Explainability and Accountability

Besides performance metrics, the reporting features in Dataiku provided detailed information on the expected impact and potential biases of the model to help me understand and communicate opportunities for improvement and mitigate risk. The ability to do so would be vital for government agencies to remain compliant with administrative law if they managed grant programs supported by automated decision-making systems.
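Dataiku generates these reports out of the box. As a rough stand-in outside the platform, permutation importance can show which features drive the predictions and therefore where impact or bias might creep in; this sketch simply continues the hypothetical model and test split from the modeling step above:

```python
import pandas as pd
from sklearn.inspection import permutation_importance

# Which features most influence the fraud predictions? Large importances on
# sensitive attributes (or proxies for them) are a prompt to dig deeper.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
importances = pd.Series(result.importances_mean, index=X_test.columns)
print(importances.sort_values(ascending=False).head(10))
```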

6. Analyze Model Fairness

Beyond explaining the model’s performance, I examined the fairness of the model to identify any harmful impacts on segments of the community. This step requires a value judgment and input from the community. Fairness is not a legally defined term and is context-dependent, with one researcher having identified 21 different definitions of fairness across academic literature. 

For a government agency, the absence of statistical bias in a model alone is unlikely to render the model fair if disadvantaged groups and individuals from the community perceive bias. As a result, it is essential to uncover group and individual bias and articulate any trade-offs made with stakeholders to achieve community acceptance in deploying an AI system. 
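As one concrete, group-level check (again continuing the hypothetical example, with a placeholder "region" column standing in for whichever community attribute is relevant), comparing flag rates across groups gives a simple demographic-parity-style view of the model’s impact:

```python
import pandas as pd

# Compare flag rates across groups; "region" is a placeholder attribute and
# "risk_scores" / "X_test" / "df" come from the earlier hypothetical sketch.
review = X_test.copy()
review["flagged"] = risk_scores > 0.8
review["region"] = df.loc[X_test.index, "region"]

flag_rates = review.groupby("region")["flagged"].mean()
print(flag_rates)
print("max disparity between groups:", flag_rates.max() - flag_rates.min())
```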

7. Review Model Performance

No model is perfect, and articulating the impacts and trade-offs between model performance and fairness considerations is key for all decision makers. The transparency around any trade-offs could also create a more informed and mature discussion about how to balance operational risk in public administration. 

The three approaches I used to filter applications for risk of fraud helped me narrow down the application pool to 5% of the original dataset. To reduce the risk of rejecting genuine applications, I merged the flagged applications into a table for manual processing. Further, incorporating humans in the loop could mitigate any administrative law risk of acting without legal authority or acting under dictation by an automated decision-making system.
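A minimal sketch of that final routing step, with placeholder flag columns standing in for the outputs of the three approaches, might look like this:

```python
import pandas as pd

# Placeholder columns: one flag per countermeasure approach on the scored dataset.
scored = pd.read_csv("scored_grant_applications.csv")
flag_cols = ["eligibility_flag", "cross_reference_flag", "model_flag"]

# Route any flagged application to a manual review queue instead of rejecting it,
# keeping a human in the loop for the final decision.
manual_review = scored[scored[flag_cols].any(axis=1)]
manual_review.to_csv("manual_review_queue.csv", index=False)
print(f"{len(manual_review)} of {len(scored)} applications queued for manual review")
```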

As a reminder, this is an immensely abbreviated overview of the full fraud detection project, so I encourage you to check out the full guidebook for all of the details on how fraud can be reduced incrementally and confidently through a number of transparent, data-driven strategies.
