Note: This article first appeared in the August 2021 edition of the Privacy Law Bulletin

Automated decision-making systems raise serious privacy challenges. The General Data Protection Regulation (GDPR) shows one way in which they may be regulated.

Automated decision-making (ADM) has become an increasingly prevalent facet of modern society. Globally and within Australia, ADM increasingly permeates both the public and private spheres, regulating an ever-expanding scope of our lives. This article canvasses the use of these systems within Australia and globally and considers their privacy implications.

Kerr J of the Federal Court of Australia recently remarked in relation to these systems that: “What was once inconceivable, that a complex decision might be made without any requirement of human mental processes is, for better or worse, rapidly becoming unexceptional”.2

A host of executive departments and agencies across Australia’s federal jurisdictions utilise advanced computer systems to support government decision-making — these include the Australian Taxation Office (ATO), Centrelink, the Department of Family and Community Services and the Australian Department of Defence.3 Similarly, ADM systems are increasingly relied upon within the private sector in the following ways:

  • Programmatic advertising is used by online platform operators in order to automatically generate advertising content based upon view data.4

  • Automated bidding and purchasing software is utilised on online cryptocurrency platforms to make buy and sell decisions.5

  • Automated face-scanning software is being touted as a viable measure by which the suitability of job applicants can be determined.6

  • Pricing algorithms are routinely employed in underwriting decisions.7

While ADM is not expressly prohibited under Australian law, and Australian law does not (yet) have a GDPR-style off-ramp, there are relevant legal regimes in Australian law which may apply, depending on the form of the decision and who is ultimately responsible for it, for example:

  • government action — administrative law

  • commercial conduct — contract, consumer protection law and anti-discrimination law

  • employment conduct — employment law and anti-discrimination law

None is a perfect fit, but this area of the law is developing fast.

The rise of ADM

Throughout the globe, increasing calls have been made to regulate the use of ADM. Much of this attention has arisen within specific contexts — for example, in response to specific concerns that ADM may entrench and perpetuate existing bias,8 in relation to the intersection between administrative law and ADM,9 or within the context of discrimination law, where concerns continue to mount that ADM could be creating new and novel categories and classes of persons which may be, by nature, beyond human comprehension, potentially to detrimental effect.10

However, within the specific context of privacy and data protection law, attention has been somewhat less focussed.

As highlighted in a report released in 2017 by Privacy International, the aggregation of data can lead to powerful, and deeply private, insights that could cause damage when misused (either intentionally or, as is often the case with ADM, unintentionally) — for example:

. . . when someone calls their best friend, visits a website of the National Unplanned Pregnancy Advisory Service, and then calls their doctor, we can assume that this person is probably thinking about an abortion, or is likely to have an abortion soon.11

The intersection of ADM and privacy law is an issue worthy of consideration in its own right for the simple reason that, as ADM evolves (and eventually approaches the level of artificial intelligence), the ability to gather, interpret and utilise personal information in a manner which can intrude on privacy interests increases to a capacity never before seen.

International privacy jurisprudence — the GDPR off-ramp

Internationally, the regulation of ADM has largely fallen within general privacy legislation. Most notable among these regimes is the European Union’s General Data Protection Regulation, more commonly known as the GDPR.

Article 22.1 of the GDPR provides this protection by furnishing the data subject with the positive “right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.12

Art. 22 — Automated individual decision-making, including profiling

1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

2. Paragraph 1 shall not apply if the decision:

(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;

(b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or

(c) is based on the data subject’s explicit consent.

3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.

Facilitating this provision is Art 21 of the GDPR, which creates a general right to object to data processing, a right which can be exercised for a number of reasons including where Art 22 is breached.

This right is not new in Europe. Though underutilised, a right to be exempt from automated decision-making has existed since 1995 by virtue of Art 15(1) of the European Data Protection Directive 95/46/EC.13

The effect of the GDPR provisions is to create an off-ramp of sorts, which permits the subject of an automated decision to elect to have the decision in question made otherwise than through an ADM system.

This off-ramp was recently tested in the District Court of Amsterdam in claims brought by the App Drivers & Couriers Union against Uber Technologies (the ADCU case).14 In the ADCU case, the union challenged Uber’s dismissal of four drivers, dismissals driven primarily by decisions made by Uber’s ADM systems.

In three instances, this came about where Uber’s ADM system (incorrectly) detected irregular trips associated with fraudulent activities. In the fourth, an automated system was installed and utilised with the intention and effect of manipulating the driver’s Uber app, which led to the driver’s dismissal. In all instances, the drivers were dismissed, given no further explanation pertaining to their dismissal, and denied the right to appeal.

Subsequently, a claim was brought on the drivers’ behalf by the union under Art 22 of the GDPR. The court was thus tasked with determining the extent of the protection which individuals have from “decision(s) based solely on automated processing . . .”

On 14 April 2021, the District Court of Amsterdam, accepting that the decisions in question were “to be regarded as decisions based solely on automated processing, including profiling, and which have legal consequences for the plaintiffs . . .”,15 concluded that the extent of this protection was indeed quite wide. The court ordered that Uber’s ADM decisions be reversed, that Uber undo the deactivation of the drivers’ accounts, and that Uber bear nearly €3.5 million in the plaintiffs’ costs.

It appears therefore that, at least in the wide array of European countries subject to the GDPR, the off-ramp created by Arts 21 and 22 provides significant protection from ADM systems.

Within the context of data protection and privacy law, this off-ramp may provide protection prospectively by permitting a person to circumvent a known unlawful system from the outset.

It may also provide protection retrospectively where private information or data is misused by an ADM system — in turn, creating a normative effect which protects others by incentivising the creation and use of compliant ADM systems. Effectively this brings ADM within the realm of existing privacy law, ensuring consistency between the privacy standards expected of human decision-makers and non-human decision-makers.

Australian Privacy Principles

In contrast to the European position, Australia does not have legislation which specifically addresses the privacy issues posed by ADM.

A right equivalent to Art 22.1 simply does not exist within either the state or Commonwealth privacy regimes,16 nor does any Australian human rights legislation contain a provision of similar effect.17

Certainly, there have been opportunities for the legislature and Australian regulators to consider whether such a right should be enshrined in legislation. For example, more than a decade ago, in the Australian Law Reform Commission’s (ALRC) Report 108, For Your Information: Australian Privacy Law and Practice, the ALRC noted that there is “research that indicates that computer software and hardware may not necessarily produce accurate and reliable results”18 and that “the OPC should provide guidance on when it would be appropriate for an agency or organisation to involve humans in the review of decisions made by automated mechanisms.”19

Whilst these concerns do not approach the level of suggesting that ADM systems could lead to privacy breaches through the sophisticated collation and use of data, they are alive to the idea that ADM systems may lead to inaccurate or unreliable results — a similar issue in that the person the subject of the decision is treated contrary to law.20

Whilst the ALRC did suggest that there should be some form of guidance providing human oversight of these types of decisions (eventually effected through the likes of the Commonwealth Ombudsman’s “Automated decision-making better practice guide”21), the enactment of statutory rights to protect against the misuse of ADM was not suggested.

A more recent opportunity to consider the role of these ADM systems, and the potential privacy issues they pose, was the ACCC’s “Digital Platforms Inquiry”, the final report of which was released in July 2019.22 However, whilst the inquiry comments on the use of these systems, particularly within the private sector, no comment is made with respect to privacy or data protection principles.

The result, as it stands, is that a person aggrieved by an ADM decision can challenge that decision only on ancillary grounds, rather than utilise an off-ramp providing an as-of-right ability to object to the decision purely on the basis that it was made by an ADM system.

It may appear that providing such a right is premature. However, this is simply not the case. The various issues presented by ADM systems have already come before the courts, highlighting the insufficiency of existing legal principles to cope with these new and novel technologies, and suggesting perhaps that legislative intervention is necessary.

ADM systems in practice: government action

The recent Federal Court case of Pintarich v Deputy Commissioner of Taxation23 (Pintarich) is one such case, albeit within the context of administrative and taxation law, in which existing legal principles were shown to be unsuitable when applied to modern ADM systems.

In Pintarich, the ADM system in question was a system utilised by the ATO to generate and send letters to taxpayers. This particular system automatically generated and sent to the taxpayer, Mr Pintarich, a letter which communicated to him that the ATO had determined to remit a certain general interest charge (GIC) from his tax bill which he would otherwise have been liable to pay (“first decision”). This letter was received by Mr Pintarich who, acting upon it, made a payment to the tax office which seemingly ought to have discharged his entire tax liability.

Problematically, however, the ATO later confirmed that the letter was not as conclusive as it might have first appeared.

The ATO stated that it was “issued in error . . . [and] did not include the entire amount of GIC which had accrued”.24 This error, as would later become apparent, arose when the ATO “keyed” certain information into an automated bulk letter-issuing system and that system manifested a decision entirely absent any subjective process of deliberation on the ATO’s part. Accordingly, the ATO considered that Mr Pintarich still owed a tax debt and sent him a notice to that effect (“second decision”).

Mr Pintarich sought judicial review of the second decision in the Federal Court of Australia under the Administrative Decisions (Judicial Review) Act 1977 (Cth), on the basis that it was made ultra vires.25 The crux of his argument was that the original letter, produced by the ADM system, manifested a valid decision, thereby rendering the ATO functus officio26 when it made the second decision.

Ultimately, and despite the insightful dissenting judgment of Kerr J cited at the commencement of this article, Mr Pintarich was unsuccessful in his review. This meant that he was held liable for the greater amount.

The majority came to this conclusion based on reasoning developed in 1999 within the context of human decision-making,27 a context divorced from the modern reality of ADM systems and their increasing prevalence, holding that the first decision manifested through the ADM system was in fact no decision at all because it lacked the requisite mental element. Therefore, according to the majority, the ATO was not functus officio when issuing the second decision, rendering it valid.

The Pintarich judgment only obliquely raises concerns within the realm of privacy law. However, it certainly shows the very real impact that these ADM systems can have, the insufficiency of the existing legal framework to deal with these new and increasingly prevalent systems, and therefore the pressing need for such systems to be provided for at law in Australia.

Similarly, though not litigated, the recent problems involving the Department of Human Services’ now-defunct “robo-debt” system also highlight the real-world impact that these systems can have.

The robo-debt system used an automated process of data matching to recover purported overpayments from current and former Centrelink recipients.28 Specifically, it compared pay as you go (PAYG) income data reported by the ATO29 against wage data reported to Centrelink, and identified a purported overpayment of benefits where there was a discrepancy between the two.30

Then, upon detection of a possible overpayment, robo-debt engaged in an automated process which concluded with a debt notice being rendered, a debt which (under the legislation) the welfare recipient was required to disprove, the matter being resolved not by virtue of DHS’s investigations but as a result of the action or inaction of the welfare recipient under investigation.
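By way of illustration only, the data-matching logic attributed to the system can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the department’s actual code: the function name and figures are invented, and the sketch assumes (as commentary on the scheme widely reported) that annual PAYG income was averaged evenly across 26 fortnights before being compared with the recipient’s fortnightly reports.

    # Hypothetical sketch of robo-debt-style data matching (illustrative only,
    # not the department's actual code). Annual PAYG income reported by the ATO
    # is averaged across 26 fortnights and compared against the income the
    # recipient reported to Centrelink for each fortnight.

    FORTNIGHTS_PER_YEAR = 26

    def flag_discrepancies(annual_payg_income: float,
                           reported_fortnightly_income: list[float]) -> list[int]:
        """Return the fortnights in which reported income falls short of the
        averaged PAYG figure, each of which the system treats as a debt."""
        averaged = annual_payg_income / FORTNIGHTS_PER_YEAR
        return [i for i, reported in enumerate(reported_fortnightly_income)
                if reported < averaged]

    # A casual worker who earned $26,000, all of it in the first half of the
    # year, and honestly reported nil income while on benefits thereafter:
    reported = [2000.0] * 13 + [0.0] * 13
    print(flag_discrepancies(26_000.0, reported))  # flags fortnights 13 to 25

The sketch makes plain why averaging was so productive of error: a recipient whose earnings were concentrated in one part of the year appears to have under-reported income in every remaining fortnight, even where every fortnightly report was honest.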

There are various issues associated with the ADM robo-debt system.

The main issue was that the system was (apparently) wildly inaccurate, causing erroneous debts to be communicated to vulnerable people, and causing unwarranted stress and strain to those who in actuality did not owe a debt at all.31

Further, scholars such as Terry Carney have noted that the manner in which the ADM system requires the welfare recipient to disprove the debt is an unlawful reversal of the onus of proof, “because [DHS] is always responsible for ‘establishing’ the existence and size of supposed social security debts.”32

The robo-debt system serves as an example of the real-world impact which ADM can have, as well as an example of just the type of issue which may have been resolved far more equitably and simply if, for example, the aggrieved welfare recipient had some sort of as-of-right ability to object to their data being utilised by an ADM system to make a decision about them, akin to the off-ramp enshrined in the GDPR.

Ultimately, a recent class-action brought to challenge the validity of the robo-debt system was settled out of court (for $112M),33 meaning that we are yet to see how the Australian courts would have reacted to such a system.34

However, the fact that the law will need to grow and change as these ADM systems continue to prevail was recently judicially acknowledged in the nearby common law jurisdiction of Singapore in B2C2 Ltd v Quoine Pte Ltd (Quoine).35

ADM systems in practice: business

In Quoine,36 the Singapore International Commercial Court considered the doctrine of contractual mistake within the context of a trading error made on an ADM cryptocurrency trading platform operated by Quoine.

Specifically, the court was asked to determine whether an ADM platform could enter a transaction that had a legally binding effect, and if so, how knowledge could be attributed to the ADM platform to ascertain whether such an agreement was in fact entered in mistake.

The alleged “mistake” in question was a trade initiated by B2C2 of its existing Ethereum cryptocurrency for Bitcoin which, due to a supposed error in the programmatic system, was traded at approximately 250 times the market rate at that time (to the benefit of B2C2).

While the court held, un-controversially, that ADM platforms could enter binding contractual relations, it is the latter part of its inquiry that is of most relevance within the context of ADM platforms.

In determining what knowledge could be attributed to the system, Thorley IJ held that, with respect to relatively uncomplicated and rule-based “deterministic systems”, that is, ADM systems which follow clear and understandable pre-programmed rules, the relevant knowledge should be that of the programmer at the time that they wrote the program.37 On this basis, he found in favour of the now-considerably-more-wealthy B2C2.

However, and problematically, he suggested that such a simple and common-sense approach would not necessarily translate to more complicated ADM systems, and that the legal system will be forced to develop as such systems arise:

. . . the law in relation to the way in which ascertainment of knowledge in cases where computers have replaced human actions is to be determined will, no doubt, develop as legal disputes arise as a result of such actions. This will particularly be the case where the computer in question is creating artificial intelligence and could therefore be said to have a mind of its own.38

For example, with an advanced ADM system it would not be suitable to refer back to the knowledge or intentions of the programmer, as it was in Quoine, because that intention may be far surpassed by the “intention” of the automated system borne out of its original lines of code.

This is because, unlike rule-based deterministic systems which rely upon the application of pre-programmed rules, advanced automated systems can operate by inferential reasoning. That is, these systems create the very rules upon which they operate through a continual process of inference based on historical data inputted into, and then generated by, the system.39

This process is termed machine learning,40 a method of programming synonymous with the rise of artificial intelligence and one in which the true nature of the ADM system changes and “evolves” with each inference.
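The distinction between the two kinds of system can be made concrete with a minimal Python sketch. The sketch is ours and deliberately simplified: the first function is a deterministic, pre-programmed rule of the kind considered in Quoine; the second infers its rule (a simple statistical threshold) from whatever historical data it is fed, so that its behaviour is fixed by the data rather than by any programmer’s decision.

    # Illustrative contrast (ours) between a deterministic, pre-programmed rule
    # and a rule inferred from historical data, a minimal stand-in for machine
    # learning.

    def deterministic_flag(order_price: float, market_price: float) -> bool:
        # The rule is fixed in advance; the programmer's knowledge at the time
        # of writing is ascertainable, as in Quoine.
        return order_price > 10 * market_price

    def learned_threshold(historical_ratios: list[float]) -> float:
        # The "rule" is inferred from past observations: trades far outside
        # the range of historical price/market ratios are flagged. Feed the
        # system different history and it adopts a different rule, with no
        # human deciding the threshold.
        mean = sum(historical_ratios) / len(historical_ratios)
        spread = (sum((r - mean) ** 2 for r in historical_ratios)
                  / len(historical_ratios)) ** 0.5
        return mean + 3 * spread

    history = [0.98, 1.01, 1.02, 0.99, 1.00]         # past price/market ratios
    print(deterministic_flag(250.0, 1.0))            # True: human-made rule
    print(250.0 / 1.0 > learned_threshold(history))  # True: data-made rule

In a genuinely learning system, the inferred rule is continually rewritten as new data arrives, which is why referring back to the programmer’s knowledge, as the court could in Quoine, ceases to be workable.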

In this instance, in much the same way as it may be impossible to understand an alien language, it may be impossible to understand the complicated and not-necessarily-human internal language of the ADM system. It would be similarly impossible to ascertain the relevant knowledge of that system.

This is an idea known in the technical community as the “black box problem” — expressed simply, the problem that “many of the computing systems programmed using Machine Learning are opaque: [and therefore] it is difficult to know why they do what they do or how they work”.41

Obviously, this creates a number of legal issues, but most relevantly it will likely pose significant issues with respect to privacy and data protection law — if we are not even sure how a system is operating, how can we know whether it is creating outputs in a manner compliant with existing privacy principles?42 Further, how can existing legal principles understand, interpret and analyse such a system so as to ensure it is utilised within the scope of the existing legal protection of individual privacy?

Australian legal response to the rise of the robots

Whilst the Australian legislature has had the opportunity to consider the enactment of express legislation to deal with the unique and complicated issues associated with ADM systems, it has, to date, decided not to. We think the time is now ripe for it to do so. Australian courts have shown themselves to be quite adept at the use of old forms in new areas. For example, in Thaler v Commissioner of Patents,43 the Federal Court determined the ownership of an invention that it found was “invented” by an extremely new and highly complicated AI program:

In my view, Dr Thaler, as the owner and controller of DABUS, would own any inventions made by DABUS, when they came into his possession. In this case, Dr Thaler apparently obtained possession of the invention through and from DABUS. And as a consequence of his possession of the invention, combined with his ownership and control of DABUS, he prima facie obtained title to the invention. By deriving possession of the invention from DABUS, Dr Thaler prima facie derived title. In this respect, title can be derived from the inventor notwithstanding that it vests ab initio other than in the inventor. That is, there is no need for the inventor ever to have owned the invention, and there is no need for title to be derived by an assignment.44

However, despite the ability of the common law to apply old principles in these new contexts, direct legislative intervention may be required to provide protection against ADM systems. Pintarich shows that established legal orthodoxy is sometimes not agile enough to apply cohesively to this complex new technology. Similarly, the “robo-debt” saga shows the pressing and real impact that these systems have, and poignantly Quoine contemplates the reality that the law must develop to meet the new and novel challenges of ever-adapting ADM systems, particularly artificial intelligence systems.

It seems then that a good starting point would be for Australia to take Europe’s lead and adopt an explicit privacy “off-ramp” which permits a person to object to the processing of their data by an ADM system.

Whilst this would not solve all of the many and varied issues that this technology has created and will create, it could be utilised to help both businesses and individuals by providing a much-needed safeguard and, by extension, consistency and legal certainty.

Parts of this article relating to automated decision-making technologies, the Pintarich case and the “robo-debt” issue draw on an honours thesis entitled “Executive ‘Decisions’ in An Era of Automation: The Once Inconceivable Rapidly Becoming the Unexceptional” submitted by Joshua Charlton to the University of Wollongong in fulfilment of his LLB (Hons) degree.

Footnotes

1. C3PO, Star Wars II: Attack of the Clones.

2. Pintarich v Deputy Commissioner of Taxation (2018) 262 FCR 41; (2018) 108 ATR 31; [2018] FCAFC 79; BC201804205 at [47].

3. Administrative Review Council Automated Assistance in Administrative Decision Making Report No 46 (2004) p 57–63.

4. Australian Competition and Consumer Commission (ACCC) Digital Platforms Inquiry Final Report (June 2019) www.accc.gov.au/system/files/Digital%20platforms%20inquiry%20-%20final%20report.pdf.

5. B2C2 Ltd v Quoine Pte Ltd [2019] SGHC(I) 03.

6. D Harwell “A face-scanning algorithm increasingly decides whether you deserve the job” The Washington Post 6 November 2019 www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/.

7. See for example B McGurk Data Profiling and Insurance Law 1st edn, Hart Publishing, 23 March 2019.

8. N T Lee, P Resnick and G Barton “Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms” Brookings 22 May 2019 www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

9. T Scassa “Administrative Law and the Governance of Automated Decision-Making: A Critical Look at Canada’s Directive on Automated Decision-Making” (2021) 54(1) UBC Law Review.

10. B Mendoza, M Szollosi and T Leiman “Automated decision making and Australian Discrimination Law” [2021] 4 ANZCompuLaw Journal 93; J Gerards and F Z Borgesius “Protected Grounds and the System of Non-discrimination Law in the Context of Algorithmic Decision-making and Artificial Intelligence” (Draft, 2 November 2020) forthcoming in the Colorado Technology Law Journal.

11. Privacy International Data is Power: Profiling and Automated Decision-Making in GDPR (April 2017) p 2 https://privacyinternational.org/report/1718/data-power-profiling-and-automated-decision-making-gdpr.

12. General Data Protection Regulation, Art 22.1.

13. Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 281/31, Art 15(1):

Member States shall grant the right to every person not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc.

14. District Court of Amsterdam Case C/13/696010/HA ZA 21-81; R English “Amsterdam Court orders reinstatement of Uber drivers dismissed by algorithm” UK Human Rights Blog 18 May 2021 https://ukhumanrightsblog.com/2021/05/18/amsterdam-court-orders-reinstatement-of-uber-drivers-dismissed-by-algorithm/.

15. Above.

16. Privacy Act 1988 (Cth); Privacy and Personal Information Protection Act 1998 (NSW); Health Records and Information Privacy Act 2002 (NSW); Privacy and Data Protection Act 2014 (Vic); Information Privacy Act 2009 (Qld); Personal Information Protection Act 2004 (Tas).

17. B Mendoza, M Szollosi and T Leiman, above n 10, at 10.

18. Australian Law Reform Commission For your Information: Australian Privacy Law and Practice Vol 1 Report 108 (May 2008) para 10.83.

19. Above, para 10.84.

20. As to the lawfulness of facial recognition technology, see for example R (on the application of Bridges) v Chief Constable of South Wales Police (Information Commissioner and others intervening) [2020] EWCA Civ 1058; J Fasman We See It All: Liberty and Justice in an Age of Perpetual Surveillance, 2021, Public Affairs.

21. Commonwealth Ombudsman, Automated decision-making better practice guide, www.ombudsman.gov.au/publications/better-practice-guides/automated-decision-guide.

22. Above n 4.

23. Above n 2.

24. Above n 2, at [110].

25. Pintarich v Deputy Commissioner of Taxation [2017] FCA 944; BC201708129.

26. For an exploration of the doctrine of functus officio see, eg, S Moloney “Finality of Administrative Decisions and Decisions of the Statutory Tribunal” (2010) 61 AIAL Forum 35, 37; see also R Orr and R Briese “Don’t Think Twice? Can Administrative Decision Makers Change Their Mind?” (2002) 35 AIAL Forum 11; E Campbell “Revocation and Variation of Administrative Decision” (1996) 22(1) Monash University Law Review 30; Walter Construction Group v Fair Trading Administration Corp [2005] NSWCA 65; BC200501383.

27. Semunigus v Minister for Immigration and Multicultural Affairs [1999] FCA 422; BC9901855 at [19] affirmed by the Full Federal Court in Semunigus v Minister for Immigration and Multicultural Affairs (2000) 96 FCR 533; 60 ALD 383; [2000] FCA 240; BC200001115 at [11], [55] and [101].

28. Community Affairs References Committee, Senate Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative (2017) para 1.6.

29. PAYG data comprises employee income figures which are reported to the ATO under compulsory reporting requirements in ss 12 to 35 of the Taxation Administration Act 1953 (Cth).

30. Above n 28; L Macleod “Lessons learned about digital transformation and public administration: Centrelink’s online compliance intervention” (2017) 89 AIAL 59.

31. Eg S Medhora “Over 2000 people died after receiving Centrelink robo-debt notice, figures reveal” ABC News 18 February 2019 www.abc.net.au/triplej/programs/hack/2030-people-have-died-after-receiving-centrelink-robodebt-notice/10821272.

32. T Carney “Robo-debt illegality: The seven veils of failed guarantees of the rule of law?” (2018) 44(1) Alternative Law Journal 2.

33. Gordon Legal, Robodebt Class Action Settlement, https://gordonlegal.com.au/robodebt-class-action/.

34. However, compare the Office of the Australian Information Commissioner decision in “WP” and Secretary to the Department of Home Affairs (Privacy) [2021] AICmr 2.

35. Above n 5.

36. Above n 5, at [205] and [208]–[211].

37. Above.

38. Above n 35, at [206].

39. M Zalnieriute, L B Moses and G Williams “The Rule of Law and Automation of Government Decision-Making” (2019) 82(3) The Modern Law Review 425, 432.

40. Above.

41. C Zednik Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence https://arxiv.org/ftp/arxiv/papers/1903/1903.04361.pdf; F M Alexandre The Legal Status of Artificially Intelligent Robots: Personhood, Taxation and Control (June 2017) https://ssrn.com/abstract=2985466; and L DL Carvalho “Spiritus Ex Machina: Addressing the Unique BEPS Issues of Autonomous Artificial Intelligence by Using ‘Personality’ and ‘Residence’” (2019) 47(5) INTERTAX 425.

42. See for example S Zuboff The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Public Affairs, 2020.

43. Thaler v Commissioner of Patents (2021) 160 IPR 72; [2021] FCA 879; BC202106774.

44. Above, at [189].

This is commentary published by Colin Biggers & Paisley for general information purposes only. This should not be relied on as specific advice. You should seek your own legal and other advice for any question, or for any specific situation or proposal, before making any final decision. The content also is subject to change. A person listed may not be admitted as a lawyer in all States and Territories. © Colin Biggers & Paisley, Australia 2024.
