After comparing a sample of artificial intelligence (AI) applications used by different countries' revenue bodies, this paper questions the adequacy of the existing and proposed regulatory frameworks. While AI enhances efficiency by identifying abnormal behavior and reducing repetitive tasks, it also raises issues of legality, transparency and fairness. Automating tax audits with the most advanced machine learning tools may yield accurate results that are nonetheless difficult to interpret and validate, thereby undermining the administrative duty to state reasons and taxpayers' right to defense. Moreover, since self-learning algorithms learn from the past, AI can perpetuate historical patterns of discrimination rooted in biased data or training. In the early-compliance phase of tax procedures, AI-powered virtual assistants can advise taxpayers on countless questions without clearly stating their boundaries and the users' rights: taxpayers may thus be unaware of the non-binding nature of the advice and be audited despite having followed favorable responses. As the approval of the EU Regulation on AI draws near, this study challenges the proposal’s current wording and advocates a robust regulatory framework that strikes a fair balance between administrative efficiency and taxpayers' rights.
After comparing a range of artificial intelligence (AI) applications used by the revenue bodies of various States, the analysis dwells on the adequacy of the existing regulatory framework and on the reform proposals. On the one hand, AI increases administrative efficiency by promptly identifying anomalous situations and relieving officials of repetitive tasks. On the other hand, it raises issues concerning the legality of taxation, transparency and fairness. The use of the most advanced self-learning algorithms for investigative or assessment purposes may yield accurate results that are nevertheless difficult to interpret, infringing the duty to state reasons for tax acts and, therefore, taxpayers’ right to defense. Moreover, since self-learning algorithms proceed on the basis of examples, they may perpetuate historical patterns of discrimination owing to errors or biases embedded in the data or in the training. As regards the administrations’ legal-guidance activity, AI-powered virtual assistants can answer the most varied taxpayer queries, but clear statutory provisions on the protection of legitimate expectations in the event of a reversal of position (revirement) are lacking. With the approval of the EU Regulation on AI approaching, the current wording of the proposal is critically discussed, since it exempts tax and customs authorities from the obligation to comply with requirements of high data quality, transparency, human oversight, accuracy and robustness.
1. Risks and opportunities associated with automated decision making by tax authorities - 2. A comparative overview of automated decision making in tax procedures … - 2.1. … for guidance and early-certainty purposes - 2.2. … in taxpayers' selection and tax auditing - 3. Regulatory challenges in the protection of taxpayers’ rights - NOTES
A recent survey on the digital transformation of tax administrations shows that most EU member States use artificial intelligence (AI) applications in tax procedures, mainly to assist taxpayers at an early stage or for risk management purposes. These technologies are starting to play a pivotal role in taxation, owing to the massive collection of data by the administrations, coupled with the apparently mechanical applicability of many provisions. Algorithmic applications, especially those based on machine learning (ML), have proven effective in cross-checking vast amounts of data. By promptly detecting anomalies and taxpayers’ mistakes, advanced analytics relieves tax officers of repetitive and time-consuming tasks, enabling administrations to focus on subtler forms of tax avoidance or evasion. This can also benefit taxpayers in several ways: for instance, they can correct errors in a timely manner with little or no penalty, or they can use interoperability to easily access all their data held by public authorities. As administrations become acquainted with such applications, challenges and risks for taxpayers’ rights begin to emerge. Apart from privacy and cybersecurity concerns, the use of AI by tax authorities raises issues of legality and fairness of their decisions. In terms of legality, if machine learning is involved, the reasoning may not always be straightforward. To borrow a popular metaphor, the most advanced AI applications (particularly “deep learning” models) appear as black boxes that connect input and output data without revealing their inner workings. Statisticians and AI experts have warned about the trade-off between prediction accuracy and model interpretability: accuracy tends to increase at the expense of interpretability.
In other words, groundbreaking self-learning tools (e.g., neural networks), easier to train and much more accurate than traditional decision trees, produce results that are difficult to interpret and validate. Thus, their use by tax authorities without sufficient human oversight may conflict with the rule of law, namely by undermining the duty to state reasons and, ultimately, the taxpayer’s right to defense. When it comes to fairness, AI is often endorsed for its neutrality: it is perceived as an asset to any organization because it seems to be able to circumvent human bias and avoid discrimination or preferential treatment based on an [continua ..]
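The trade-off can be illustrated in miniature. The toy classifiers below are not deep-learning models, but they exhibit the same asymmetry the text describes: a hand-written rule states its criterion openly in the code, while a trained model exposes only numeric weights that do not, by themselves, state a reason. The data, threshold and learning parameters are purely illustrative assumptions:

```python
# Interpretability contrast, in miniature. Both classifiers agree on
# the examples, but only the rule-based one explains its decision.
import math

# Illustrative examples: (features, label).
data = [([0.1, 0.9], 1), ([0.9, 0.2], 0), ([0.2, 0.8], 1), ([0.8, 0.1], 0)]

def rule_based(x):
    # Transparent: the decision criterion is readable in the code.
    return 1 if x[1] > 0.5 else 0

# "Black box": weights learned by gradient descent on the examples.
w = [0.0, 0.0]
for _ in range(1000):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1])))
        for i in range(2):
            w[i] += 0.5 * (y - p) * x[i]   # logistic-regression update

def learned(x):
    return 1 if 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1]))) > 0.5 else 0

# Same classifications, but the weights alone do not state a reason.
print([learned(x) for x, _ in data], [rule_based(x) for x, _ in data])
```

Auditing the second classifier requires reconstructing what the learned weights mean, which is precisely the validation problem the text raises for far larger models.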
Before discussing the current experiences with AI in tax procedures, a few historical notes on the development of AI show how it has always interacted with taxation. Due to the complexity and apparently mechanical applicability of many fiscal provisions, back in the seventies one of the first AI applications ever developed was specifically designed for tax purposes. It was an “expert system”, called “Taxman”, able to classify the facts of a given corporate restructuring under the relevant fiscal provision, so as to determine the correct tax treatment. It relied on the translation of fiscal provisions into a complete set of “if-then rules” (the knowledge base) and on an inference engine using logical inference rules for deduction. Expert systems could solve complex queries efficiently, thereby supporting users in the identification of the relevant rules and automating the more mundane aspects of their job. However, these systems never took off: unable to learn from data, they were limited by the knowledge explicitly programmed into them and often required manual updates to stay relevant. Moreover, the “if-then” logic was ill-suited to representing an often multifaceted reality. Building upon Alan Turing’s insights into “learning machines”, research on AI has since shifted its focus from the detailed programming of knowledge bases to the power of data. In other words, instead of providing the machine with a full system of logical inference, developers began to design machines that were «as simple as possible consistently with the general principles», but capable of extracting knowledge from data (e.g., through pattern recognition). At the core of both approaches lie algorithms. An algorithm is «a finite set of rules that gives a sequence of operations for solving a specific type of problem».
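The expert-system pattern described above – an explicit knowledge base of “if-then” rules scanned by an inference engine – can be sketched in a few lines. The rules and facts below are purely illustrative and are not Taxman’s actual rule base:

```python
# Minimal expert-system sketch: an explicit knowledge base of
# "if-then" rules plus a forward-chaining inference engine.
# Rule names, conditions and conclusions are illustrative only.

RULES = [
    # (name, conditions that must all hold, fact to conclude)
    ("r1", {"shares_exchanged", "continuity_of_interest"}, "reorganization"),
    ("r2", {"reorganization", "no_boot_received"}, "tax_free_treatment"),
]

def infer(facts):
    """Repeatedly apply the rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

case = {"shares_exchanged", "continuity_of_interest", "no_boot_received"}
print(infer(case))  # chains r1 then r2 to derive "tax_free_treatment"
```

The limits the paragraph mentions are visible even at this scale: the system knows nothing beyond `RULES`, and every change in the law requires editing the rules by hand.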
While the first approach, often referred to as “symbolic”, uses a set of fixed rules manually programmed into the machine, the second (“non-symbolic”) approach learns the rules, or improves their performance, from data. The latter relies on “learning algorithms”, that is, processes that learn or adjust a function from input data alone (“unsupervised learning”), from paired input and output data (“supervised learning”) or from reward signals (“reinforcement [continua ..]
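The contrast with the symbolic approach can be made concrete with a minimal example of supervised learning: rather than being given a rule, the machine adjusts the parameters of a function until it fits example input/output pairs. The data and the linear model below are illustrative assumptions, not any administration’s actual system:

```python
# Supervised learning in miniature: the machine fits a function
# (here, a line y = a*x + b) to example input/output pairs instead
# of executing a hand-coded rule. Data are illustrative.

xs = [1.0, 2.0, 3.0, 4.0]   # inputs  (e.g., declared turnover)
ys = [2.0, 4.0, 6.0, 8.0]   # outputs (e.g., expected tax due)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form least-squares estimates of slope and intercept.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(a, b)          # learned parameters, not programmed rules
print(a * 5.0 + b)   # prediction for an unseen input
```

Nothing in the code states the relationship between inputs and outputs; it is extracted from the examples, which is exactly why biased examples yield biased functions.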
Tax administrations employ AI in risk management not only to select taxpayers and prioritize their returns for audit purposes, but also to prevent non-compliance. This may be done in several ways, which raise different legal issues. The OECD “Tax Administration” Series has repeatedly highlighted that a growing number of administrations have been setting up virtual or digital assistants, such as “chatbots”, to help respond to taxpayer enquiries and support self-service. Compared to general rulings published on the administrations’ websites, these tools, especially when fueled by machine learning, provide automated yet tailored answers that enhance guidance services. Besides, they are available around the clock, regardless of office hours, and proved useful during the pandemic. A recent survey shows that in 2020, among fifty-eight national tax administrations, 72% – including those of twenty European countries – were using (60%) or implementing (12%) virtual assistants (e.g., chatbots or voice bots). The newly established “Inventory of Tax Technology Initiatives” reports that roughly one third of them are rule-based, meaning that interactions with taxpayers follow a set of pre-programmed rules; their answers are thus easy to interpret, but not very accurate. A little less than a third use machine learning, which presumably gives more accurate replies. The remaining part integrates both approaches. The US Internal Revenue Service (IRS) set up voice bots and chatbots that simulate human conversation and use AI-powered software to respond to natural language prompts. They are currently unauthenticated, meaning they cannot answer questions about a specific taxpayer account, but the IRS has planned to launch more advanced authenticated bots that would allow access to taxpayers’ IRS accounts and be able to set up taxpayer-specific instalment agreements.
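A rule-based assistant of the kind just described can be sketched as follows; the keywords and canned answers are hypothetical and do not reproduce any actual administration’s chatbot:

```python
# Rule-based virtual assistant sketch: replies follow pre-programmed
# rules (keywords -> canned answer), so every answer can be traced
# back to a rule, but anything outside the rules gets a fallback.
import re

FAQ_RULES = [
    # (keywords that must all appear, canned answer) - illustrative.
    ({"deadline", "file"}, "Returns are generally due by 30 April."),
    ({"refund", "status"}, "Refunds are usually issued within 6 weeks."),
]
FALLBACK = "I cannot answer that; please contact an officer."

def reply(question):
    words = set(re.findall(r"[a-z]+", question.lower()))
    for keywords, answer in FAQ_RULES:
        if keywords <= words:   # all keywords present -> rule fires
            return answer
    return FALLBACK

print(reply("When is the deadline to file?"))
print(reply("Is my ruling binding?"))  # no rule matches -> fallback
```

The trade-off reported by the Inventory is apparent: every reply is fully explainable by pointing to the rule that produced it, but the assistant is only as accurate as its small, hand-maintained rule set.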
Currently the IRS “Interactive Tax Assistant” contains a disclaimer warning taxpayers that they cannot rely on the reply as if it were a private ruling. The IRS therefore retains the power to collect additional tax and penalties if taxpayers act in accordance with incorrect answers. Another example is that of Spain, which resorted to a rule-based virtual VAT Assistant when implementing the e-invoicing reform. The virtual assistant is trained on decision [continua ..]
Unable to audit all taxpayers, tax administrations are constantly seeking effective methods of monitoring non-compliance and selecting taxpayers for audit. Moving away from random audits, they have been testing annual selections of specific categories of taxpayers, parameters and statistics to identify unusual positions and, more recently, AI-based risk-management methods. AI is also being used to gather evidence of tax evasion and to automate – partially or fully – tax assessments. According to the already mentioned “Inventory of Tax Technology Initiatives”, the revenue bodies of sixteen European countries use artificial intelligence in their risk assessment analyses or to detect tax evasion and fraud. The tax administrations of eleven European countries report using AI to assist tax officials in making administrative decisions or to recommend actions, but only one of them (Albania) employs AI to make final administrative decisions. For these purposes, only one (the United Kingdom) reports having an ethical framework in place for the application of AI, while ten report having limitations in place, such as a prohibition on using AI to make final administrative decisions. AI-based risk-assessment methods are widely used in France, Italy and Germany. France has gained significant experience with the use of data mining algorithms to tackle VAT and personal income tax fraud. After creating a data warehouse that collects all the relevant data from multiple sources that used to be compartmentalized (e.g., tax returns, bank account files, social security data), algorithms cross-check them, uncover inconsistencies and compare them to models of fraudulent behavior. The system does not merely select taxpayers for subsequent audit: it may also send them automatic requests for information when it detects mistakes. Taxpayers can avoid a more in-depth audit by promptly correcting those mistakes.
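The cross-checking logic described for the French system can be illustrated with a deliberately simplified sketch, in which amounts declared by taxpayers are matched against the same amounts reported by third parties and discrepancies above a tolerance are flagged for follow-up. Identifiers, figures and the threshold are invented for illustration:

```python
# Cross-checking sketch: declared amounts are matched against
# third-party reports (e.g., bank files), and gaps above a
# tolerance are flagged for an information request or audit.
# Taxpayer IDs, figures and the 5% threshold are illustrative.

declared = {"TP001": 50_000, "TP002": 120_000, "TP003": 30_000}
reported = {"TP001": 50_200, "TP002": 180_000, "TP003": 30_000}
TOLERANCE = 0.05  # flag gaps above 5% of the third-party amount

def flag_inconsistencies(declared, reported, tolerance=TOLERANCE):
    flagged = []
    for tp, third_party_amount in reported.items():
        gap = abs(declared.get(tp, 0) - third_party_amount)
        if gap > tolerance * third_party_amount:
            flagged.append(tp)
    return flagged

print(flag_inconsistencies(declared, reported))  # ['TP002']
```

Even this toy version shows why data quality matters: a wrong third-party figure, or a tolerance set without justification, silently changes who gets flagged.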
Moreover, to counter certain tax violations, a law was passed in 2019 allowing tax authorities, on a trial basis, to collect and use taxpayers’ freely accessible online content (e.g., on social media or online marketplaces) by means of “computerized and automated processing” (excluding any facial recognition system). Italy has recently set up a similar scheme by making administrative databases interoperable and scanning them with AI algorithms [continua ..]
From a policy perspective, the above-mentioned “childcare allowance scandal” reveals the lack of a regulatory framework to address the challenges posed by AI. The data protection framework does provide some principles that might be invoked to counter such risks, but their scope is limited to the processing of natural persons’ data. Besides, national data protection authorities are expected to tackle countless issues (e.g., the safety of minors’ online activities, the use of sensitive data, etc.) with limited resources, while AI is becoming ubiquitous. According to case law in different European countries, three main rights should be granted when public administrations rely on automated decision-making tools: the right to algorithmic transparency, the right to human intervention and the right to protection against discrimination. These rights are mainly inferred from the right to private life enshrined in the European Convention on Human Rights and in the EU Charter of Fundamental Rights. The right to private life is further developed in the General Data Protection Regulation (GDPR), according to which individuals have the right to be informed about the existence of automated decision-making, including profiling, and the right not to be subject to a decision based solely on such techniques. For multiple reasons, this framework does not appear to offer taxpayers a minimum level of protection. First of all, the GDPR allows EU member States to restrict the scope of those rights to ensure other important objectives of general public interest, including taxation matters. Furthermore, the principle of tax secrecy, which prevents the reverse-engineering of tax audits, is difficult to reconcile with the principle of algorithmic transparency. While such a limitation is reasonable, other restrictions should be avoided or, if currently in place, reconsidered.
That is the case with limitations on the rights to access and rectify personal data held by tax authorities, and on the right to refuse fully automated fiscal decisions, advance rulings included. Not even the draft Regulation on AI appears to address these issues properly. The “EU AI Act” proposal, following a risk-based approach, imposes regulatory burdens only where AI systems are likely to threaten fundamental rights and safety. That is the case, for instance, with systems used by law enforcement authorities in [continua ..]