Authors: Aragona, Biagio; Chianese, Dario
Title: Towards socio-technical Ai risk management for Digital twins
Journal: Rivista di Digital Politics
Year: 2025 - Issue: 1 - Pages: 79-93

Digital twins' (Dt) ever-increasing reliance on Artificial intelligence (Ai) systems calls into question the appropriateness of current approaches to risk management. This contribution aims to provide a clearer understanding of recent developments and shortcomings in Ai risk management, calling for a more practice-oriented approach to the challenges encountered in Dt solutions that make use of Ai systems. Ai risk management strategies stem from sector-specific regulation, while increasingly taking account of more general Ai system usage guidelines developed at national and international levels. Although existing frameworks lend themselves to a wide variety of scenarios by virtue of their flexibility, resources of this kind lack the level of detail and standardization usually found in other high-risk sectors such as financial services, healthcare, aviation, and nuclear energy. Since Ai systems and Dt are now employed in high-risk applications, similar safeguards should be adopted where possible. We argue that a cross-sectoral protocol is needed to facilitate the design, development, deployment, and management of Ai systems in Dt at an operational level. Serious efforts to provide robust, verifiable, and widely applicable measurement methods for Ai risk should be pursued, despite the difficulties and limitations such an approach entails. A precise description of human roles and responsibilities regarding Ai systems is necessary, and it stems from a clear categorization of both the system's autonomy and the level of risk its usage entails. When Dt employed as policy instruments leverage Ai systems, they call for this level of detail, involving actors from different fields and the full range of stakeholders from the design stage through monitoring.
Describing and auditing existing and potential biases throughout the Ai system's lifecycle is therefore crucial, and best done in a timely manner: biases inherent in an already designed system can be difficult to tackle, since they may be amplified during deployment, propagating harm and affecting policy options.
SICI: 2785-0072 (2025)1<79:TSARMF>2.0.ZU;2-I
Full text: https://www.rivisteweb.it/download/article/10.53227/117423
Alternative full text: https://www.rivisteweb.it/doi/10.53227/117423