Digital Technologies, Tax Administration, and Taxpayer Rights


Michael Hatfield is the Dean Emeritus Roland L. Hjorth Professor of Law, University of Washington. He researches digital technologies and taxation, the professional responsibility of tax lawyers, and tax education in law schools. His e-mail address is mhat@uw.edu.

Tax researchers are increasingly focusing on the impacts of digital technologies on tax authorities and taxpayers. In June 2023, Benita Rose Mathew[1] of the Surrey Institute for People-Centred Artificial Intelligence spearheaded a workshop on digital innovation in tax administration with participants from universities, tax authorities, companies, and NGOs from around the world.[2] A similar mix of participants gathered in Antwerp in June 2024 for the 9th International Conference on Taxpayer Rights, convened by the Center for Taxpayer Rights and hosted by the DigiTax Centre at the University of Antwerp. The focus was “Towards a Digital Taxpayer Bill of Rights.”[3]

About four dozen experts participated in about a dozen sessions over three days in Antwerp. Discussions centred on taxpayer rights to transparency, fair treatment, and human intervention, particularly in the context of AI black boxes, data collection and accuracy, and the incomparable speed and scale at which digitized systems can inflict harm. The following are a few points that struck me as especially interesting.

GDPR and the AI Act 

Within the EU,[4] digitalization implicates both the GDPR and the recent AI Act. As taxation is an important objective of public interest, the GDPR permits a state to limit the rights of data subjects and the obligations of a tax authority (as a data controller), provided any limitation of rights is necessary and proportionate and the essence of the data subject’s fundamental freedoms and rights remains protected. Philip Baker[5] explained that the meaning of this power to limit rights and obligations in tax administration is unclear, as key issues have yet to be addressed by the EU Court of Justice.

The EU AI Act, now coming into force, imposes different levels of regulation based on a classification of AI applications. Applications classified as “high risk” are subject to the most regulation, and the Act includes a list of AI uses that will be classified as high risk: for example, AI used to determine eligibility for public benefits, used for criminal law enforcement, or that uses biometric data. Sylvie De Raedt[6] highlighted that, although tax administration is not explicitly listed as high risk, it may be in specific situations. For example, tax authorities using biometric data or determining eligibility for public benefits would be high risk.

Baker questioned whether, under European Convention on Human Rights criteria, the risk of administrative penalties, not just prison time, means much of tax administration should be classed as law enforcement and, thus, high risk.

The deeper issue underlying the extent to which tax authorities should be limited by non-tax laws is what sets tax authorities apart as exceptional. Is it that taxes are the lifeblood of government? Historically, this has been the US approach, Nina Olson explained,[7] and it leads to fewer limitations on tax authorities. Conversely, Baker argued that tax authorities are exceptional because they collect, hold, and process the most personal data on the most persons. Thus, he argued, tax authorities engage in the riskiest data activities, and their AI and data uses should be subject to more, not fewer, data protection and human rights safeguards.

Privacy and history

National histories should be expected to influence the politics of how tax authorities use AI. Magnus Kristoffersson[8] described the traditional openness of records held by Swedish authorities, including parts of personal tax records that taxpayers in many other nations would expect to be confidential. Baker speculated that the US and UK governments have lagged in developing data and AI protections because, unlike much of Europe, neither experienced invasion or dictatorship in the 20th century. He attributed the EU’s lead in data protection to living memories of invasion, dictatorship, and the misuse of government-held information for persecution.

Glitches and incremental steps

Lotta Larsen[9] reminded us that digitized systems will always have bugs and glitches, and system designers will struggle to keep up with societal changes. Keeping up with technological changes continues to prove difficult for some tax authorities, such as the IRS. David Padrino, the IRS Chief Transformation Officer, cited new funding aimed at modernizing the IRS but said there is no IRS-wide digital transformation strategy; the focus is on incremental projects. One such project is the introduction of “Direct File,” enabling taxpayers to file returns directly with the IRS. His pride in the success of this program, even as he acknowledged that taxpayers elsewhere have long been able to file directly with their tax authorities, underscored how far the IRS is from the type of digital revolution the AI sales forces hype.

Cautionary tales

Dirk Van Rooy[10] presented the Australian Robodebt scandal and David Hadwick[11] discussed the Dutch childcare benefits scandal as cautionary tales for tax authorities. The Robodebt scandal arose from automating processes to recover overpaid welfare benefits. Robodebt compared welfare recipients’ reported income with their tax data held by the Australian Taxation Office, but it used averages of the tax data, which generated inaccurate debt calculations for those with fluctuating incomes. The beneficiaries targeted for collection were not informed how their alleged debts were calculated; there was no process to challenge the alleged debts; and there was little human oversight. Half a million alleged debts had to be recalculated by hand, and in a class action settlement, the government paid $1.2 billion. In addition to the obvious process deficiencies, Van Rooy identified ideological assumptions about welfare recipients as critical to the flawed design of the system.
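The averaging flaw is easy to see in miniature. The sketch below is hypothetical: the dollar figures and the worker's schedule are invented for illustration, not taken from the actual Robodebt system. It shows how smoothing annual income evenly across fortnights invents income in exactly the fortnights a casual worker legitimately claimed benefits.

```python
# Hypothetical sketch of the Robodebt averaging flaw; all numbers are
# invented for illustration, not taken from the actual system.
FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income: float) -> float:
    """The flawed step: smooth annual tax-office income evenly over the year."""
    return annual_income / FORTNIGHTS_PER_YEAR

# A casual worker: $2,000 in each of 13 working fortnights and $0 in the
# 13 fortnights in which they (correctly) claimed benefits.
actual_income = [2000.0] * 13 + [0.0] * 13
annual_income = sum(actual_income)                # $26,000 for the year

avg = averaged_fortnightly_income(annual_income)  # $1,000 every fortnight

# Fortnights in which averaging invents income the person never had,
# making their legitimate benefit claims look like overpayments:
false_positive_fortnights = sum(1 for income in actual_income if income < avg)
```

For a salaried worker with perfectly even income, the average is harmless; for the fluctuating earner above, it is wrong in every single fortnight, which is why recalculating the alleged debts required going back to the actual fortnightly records by hand.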

Hadwick highlighted the role of mistaken assumptions in the Dutch childcare benefits scandal, in which non-Dutch recipients were disproportionately targeted. The problem was that human bias against the non-Dutch resulted in sampling biases in the data used for machine learning. This made discrimination against the non-Dutch a feature rather than a bug of the automated system. Exacerbating problems for those targeted was a decision to require reimbursements based not on individual misconduct but rather on a “hunch” about the percentage of individuals who would be non-compliant if they had certain social relationships (e.g., parents of children served by a childcare provider previously connected to fraud). Hadwick’s message was that the scandal was caused not by artificial intelligence but by politically motivated rather than data-based theorizing. He emphasized that humans must not be removed from automated decision-making processes, that is, there should always be a “human-in-the-loop,” but that this is insufficient; the humans must be subject to a duty of care (e.g., to use data rather than hunches when building the system).
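How human bias in audit selection becomes a sampling bias in training data can be demonstrated with a toy simulation. Everything here is invented (the group names, rates, and audit policy are illustrative, not the actual Dutch system): two groups have the same underlying non-compliance rate, but one is audited far more often, so the resulting labelled data makes that group look riskier to any model trained on it.

```python
import random

# Hypothetical illustration of sampling bias; all names and rates invented.
random.seed(0)

TRUE_FRAUD_RATE = 0.05  # identical for both groups by construction

population = [("group_a" if i < 5000 else "group_b",
               random.random() < TRUE_FRAUD_RATE)
              for i in range(10_000)]

# Biased human selection: group_b is audited four times as often as group_a.
AUDIT_RATES = {"group_a": 0.10, "group_b": 0.40}
audited = [(group, is_fraud) for group, is_fraud in population
           if random.random() < AUDIT_RATES[group]]

# Count confirmed fraud cases in the audit data that would feed a model.
fraud_counts = {"group_a": 0, "group_b": 0}
for group, is_fraud in audited:
    if is_fraud:
        fraud_counts[group] += 1

# Despite equal true rates, group_b dominates the fraud labels, so group
# membership itself looks like a predictive "feature" of fraud.
```

A model trained on these labels would score group_b members as higher risk, directing yet more audits at them and reinforcing the bias, which is the feedback loop Hadwick described.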

Developing countries 

Rhoda Nyamongo[12] addressed the challenges faced by tax authorities in developing countries implementing AI systems. One is the shortage of personnel trained in data science or taxpayer rights. Like Hadwick, she emphasized that humans-in-the-loop are not sufficient: what is needed are humans who understand both what they are doing and what the AI system is doing, and who work carefully. A second challenge is that many developing countries are in the early stages of AI and data protection regulation. A third is that tax authorities in developing countries often are locked into off-the-shelf software that they are unable to improve. A fourth is corruption, which means taxpayers in some developing countries may trust an AI-generated decision more than that of a tax agent.

Personal involvement 

The role of tax agents in digitized tax administration was a recurring topic. De Raedt noted that digitization reduces human contact between taxpayers and tax authorities. Vincent Vercauteren[13] lamented that tax agents already no longer become personally acquainted with the taxpayers they audit but instead rely exclusively on e-mailed demands for the taxpayer’s digitized books and records, which leads to worse results for both the agency and the taxpayer. Nina Olson[14] worried that one ultimate effect of digitization will be to exacerbate inequities, with high-income individuals and corporate taxpayers receiving more human interaction while others are forced to rely exclusively on AI-delivered services.

Data from third parties 

Tax authorities will increasingly rely on third parties to provide data for AI analysis. Alessia Tomo[15] argued that transparency as to what the government is doing is a cornerstone of democracy, which means taxpayers have the right to know that tax authorities are engaged in web-scraping, for example. She argued that the re-use of this data is not only practically and legally problematic but also does nothing to improve voluntary compliance. If the agency’s goal is improving compliance rather than merely detecting tax cheats, this is an inadequate strategy, as taxpayer compliance is unaffected if taxpayers do not know what information is being collected.

Bigger picture

Several speakers suggested the importance of tax researchers considering the bigger picture. Raffaele Russo[16] characterized much of the discussion at the conference as simply talking about AI doing what tax agents have done for 50 years. Instead, he characterized this as a moment for thinking outside the box: thinking how radically different not only tax administration but also substantive tax law could be. Diana van Hout[17] suggested considering whether substantive tax law should be reformed to reduce the need for sensitive personal data (e.g., medical expense information). Larsen encouraged pondering who will ultimately control the data (a handful of multinational corporations?).

Digital Taxpayer Bill of Rights

The goal of the conference was to produce recommendations for a Digital Taxpayer Bill of Rights, which are expected to be published around the New Year. Materials and recordings will soon be available at the Center for Taxpayer Rights site.

Footnotes

[1] Benita Rose Mathew, Lecturer in AI and Fintech, University of Surrey.  See BM Bio

[2] This was the AI in Tax, Audit, and Fintech workshop sponsored by the Turing Network and various units within Queen Mary University of London, University of Exeter, and the University of Surrey.  See the Programme.

[3] The conference was sponsored by Tax Notes, the International Bureau of Fiscal Documentation, the American College of Tax Counsel, Caplin & Drysdale, the International Fiscal Association, and EY-Belgium. See the Agenda.

[4] Post-Brexit, the GDPR is retained law in the UK.

[5] Philip Baker, OBE, KC, Field Court Tax Chambers; Visiting Lecturer, Faculty of Law, University of Oxford. See Bio.  

[6] Sylvie De Raedt, Research Manager, DigiTax Centre of Excellence and Assistant Professor, Faculty of Law, University of Antwerp. See SDR Bio

[7] Olson warned of the risks, especially in digitized tax administration, of this rationale for tax exceptionalism.

[8] Magnus Kristoffersson, Associate Professor, School of Behavioural, Social and Legal Sciences, Örebro University.  See MK Bio

[9] Lotta Larsen, Research Fellow and Associate Professor, University of Exeter Business School.  See LL Bio

[10] Dirk Van Rooy is a member of the Centre for Responsible AI and an Associate Professor at the University of Antwerp. See DVR Bio.

[11] David Hadwick, Researcher, DigiTax Centre of Excellence and Senior Researcher, Faculty of Law, University of Antwerp, and PhD Fellow in legal fundamental research at the FWO Research Foundation for Flanders. See DH Bio

[12] Rhoda Nyamongo, Research and Teaching Associate, Institute for Austrian and International Tax Law, Vienna University of Economics and Business. See RN Bio

[13] Vincent Vercauteren, Tiberghien Lawyers.  See VV Bio.

[14] Nina Olson, Executive Director of the Center for Taxpayer Rights, served as the National Taxpayer Advocate in the IRS for 18 years. See NO Bio

[15] Alessia Tomo, Operational Coordinator, DigiTax Centre of Excellence and Senior Researcher, Faculty of Law, University of Antwerp. See AT Bio

[16] Raffaele Russo, Chiomenti, Italy. See RR Bio

[17] Diana van Hout, Associate Professor, Tilburg Law School, Tilburg University.  See DvH Bio