AI Governance Podcast with Daniel Turner

In this blog post, we summarize a podcast with Daniel Turner from FinCrime Dynamics, in which we discuss AI governance, LLMs, regulation, and some solutions for the financial crime sector. For the full audio recording and transcript, please follow this link:

The European AI Act is a significant focus in the AI regulatory landscape. Unlike the US, which applies existing regulations to AI, Europe is drafting its own dedicated regulation. The UK aligns closely with Europe but has some sector-specific variations. As a key player, Europe's actions are likely to influence regulations worldwide.

The Act's requirements break down into four categories: data, compute, model, and deployment. A benchmark called the Holistic Evaluation of Language Models (HELM) helps assess language models against these different aspects.

Regarding data, companies must describe the data sources used to train their models. This can be complex, especially for large-scale models built on data mining or scraping. Obtaining permissions for data access can be challenging, and data quality, bias, and safety must all be considered to comply with data governance laws such as GDPR.

Compute refers to the energy consumption and computational resources needed to build AI models. Europe is environmentally conscious and may require disclosure of compute usage; smaller companies might face limitations here compared with larger competitors.

In the model category, companies must outline the model's capabilities and limitations, assess risks, and implement mitigations before deployment. Explainability is crucial, but commercially available large language models (LLMs) do not expose their underlying training data, making transparency and explainability harder to achieve.

Deployment involves disclosing that content is machine-generated and specifying the member states within the European market where the model will operate.
Downstream documentation is essential for the model's technical compliance with the European AI Act, enabling others to use the model transparently and responsibly. Overall, the European AI Act aims to ensure ethical AI deployment and to protect against potential risks and biases. As the AI landscape evolves, regulations like these will continue to be refined to promote responsible AI development and usage.

Daniel Turner discussed the concept of AI trustworthiness and the challenges of machine learning in the context of financial crime. He introduced FinCrime Dynamics, an intelligence-sharing platform that simulates and models complex behaviors, particularly in financial crime, to test and train machine learning models. The platform uses agent-based modeling, in which simulated agents (accounts and customers) engage in financial transactions and behaviors. This enables institutions to understand and detect new and evolving criminal schemes, even those they have not encountered before. By sharing intelligence and using synthetic data created from the simulations, machine learning models can be trained to recognize and respond to new threats effectively.

The podcast also touched on the analogy of vaccines used to combat viruses, with the simulations acting as proactive defenses against evolving financial crimes. The discussion concluded with mention of an upcoming consortium focused on trustworthy AI and intelligence sharing to combat financial crime. To learn more about FinCrime Dynamics and the consortium, interested individuals can get in touch with Daniel Turner through the appropriate channels.
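To make the agent-based modeling idea concrete, here is a minimal sketch of how simulated accounts can generate labeled synthetic transactions. This is a hypothetical illustration only, not FinCrime Dynamics' actual platform: the `Account` class, the "smurfing" pattern (many transfers just below a reporting threshold), and all parameter values are our own assumptions for the example.

```python
import random

random.seed(42)

THRESHOLD = 10_000  # hypothetical reporting threshold for this sketch


class Account:
    """A simulated agent that emits transactions each simulation step."""

    def __init__(self, account_id, criminal=False):
        self.account_id = account_id
        self.criminal = criminal

    def transact(self):
        if self.criminal:
            # Smurfing pattern: repeated amounts just below the threshold.
            amount = random.uniform(0.9 * THRESHOLD, THRESHOLD - 1)
        else:
            # Ordinary activity: a wide range of everyday amounts.
            amount = random.uniform(10, 2 * THRESHOLD)
        return {
            "account": self.account_id,
            "amount": round(amount, 2),
            "label": int(self.criminal),  # 1 = simulated criminal behavior
        }


def simulate(n_accounts=100, n_steps=50, criminal_fraction=0.05):
    """Run the simulation and return a labeled synthetic transaction log."""
    accounts = [
        Account(i, criminal=random.random() < criminal_fraction)
        for i in range(n_accounts)
    ]
    return [acct.transact() for _ in range(n_steps) for acct in accounts]


transactions = simulate()
```

The resulting labeled records could then be used to train a detection model on behavior an institution has never observed in its real data, in the spirit of the "vaccine" analogy from the podcast.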
