AI is like the powerful character in an action movie who looks invincible until they turn around, revealing a fatal spear embedded in their back. The spear in AI's back is the American legal system, which has been issuing free passes to tech companies and platforms for decades on the idea that limiting innovation will hurt economic growth, so we'd best let tech companies run with few restrictions.
The issuance of free passes to tech monopolies, cartels and platforms may be ending. Letting Big Tech run with few restrictions has led to the smothering of innovation as tech monopolies do what every monopoly excels at: buying up potential competitors, suppressing competition, pursuing regulatory capture via lobbying and spending freely on deceptive PR.
Now anti-trust regulators are finally looking at the uncompetitive wastelands created by Big Tech and recognizing the union-busting tactics of quasi-monopolies like Starbucks and Amazon. The bloom might be off the Big Tech / Monopoly rose.
Enter AI, which offers the thrilling prospect of trillions of dollars in additional profits for purveyors of AI and all those companies which use their AI tools.
The American legal system deals with new technologies much as a reptile digests a meal--slowly. I get email from readers about defending the Constitution, something we all support. I am not an attorney, but my impression of Constitutional law is that it is a tediously complex thicket of case law that must be carefully picked through before we can even begin to understand exactly what we're defending: every issue anyone might be concerned about has already accumulated an immense load of rulings and arguments.
This is American jurisprudence: advocacy goes to trial and rulings are issued, some of which will pertain to all future cases and some of which will not. The law advances in new fields such as AI as positions are argued before judges / juries and then reviewed by higher courts as losers appeal judgments / rulings.
A great many things we might think are novel have long been settled. Isn't the Selective Service Act a form of involuntary servitude? Nope, that's been settled long ago. The government's right to draft you to fight in a war of choice is unquestionably the law of the land.
AI has certain novel features which have yet to be decided by the processes of advocacy, rulings and appeals. In general, corporations selling / giving away AI tools claim these tools incur no liability to their issuers because they're akin to software that, for example, adds HTML coding to plain text: a tool that performs a process.
This strikes me as incomplete. It seems to me that AI, by its very name and nature, is making implicit claims of utility far beyond mere processing of data or text: AI is called AI because it is adding intellectual value to data or text.
All the disclaimers in the world cannot dissolve this implicit claim of utility that adds value. Since I'm not an attorney, I'm not able to put this in proper legal terms; I am using the terminology of philosophy. But the law is a system based on philosophic principles, and so the language of philosophy plays a key role in broadly applicable legal rulings.
Now let's consider a real-world example. A patient receives a mis-diagnosis and suffers as a direct result of the error. In our system of law, somebody or some entity is liable for the consequences of the error, and must pay restitution to those harmed by it.
As fact-finding proceeds, it turns out an AI tool was used in the initial scanning of the patient's data. The company that created the AI tool will naturally claim that the tool was intended only to be used under the supervision of a human professional, and there were no claims made as to the accuracy of the AI tool's output.
This is a specious argument: the clear intent of the AI tool is to replace human expertise, lowering the cost of diagnosis by accelerating the process and increasing its accuracy.
Clearly, the tool was designed for exactly this purpose, and therefore deficiencies in its performance that contributed to the mis-diagnosis--for example, the fact that the AI tool rated the erroneous diagnostic result as highly likely to be accurate--are the responsibility of the company that issued the AI tool.
Should the court find the AI company 1% liable for the misdiagnosis, the principle of joint and several liability means the monetary settlement falls on whichever parties can actually pay. Should the other parties found liable be unable to pay a $10 million settlement, the AI company might end up paying $9 million of the $10 million, despite its apparently limited share of liability.
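To make that arithmetic concrete, here is a minimal sketch in Python of how a shortfall gets reallocated under joint and several liability. The party names, fault shares and dollar figures are entirely hypothetical; this illustrates the principle, not any actual allocation formula used by a court.

```python
# Hypothetical illustration of joint and several liability allocation.
# All parties and figures are invented for the example.

settlement = 10_000_000  # total judgment awarded to the plaintiff

# Each defendant's share of fault as found by the court, and what it can actually pay.
defendants = {
    "hospital":  {"fault": 0.70, "can_pay": 500_000},
    "physician": {"fault": 0.29, "can_pay": 500_000},
    "ai_vendor": {"fault": 0.01, "can_pay": 10_000_000},  # the deep pockets
}

# First pass: each party pays its proportional share, capped by what it can pay.
owed = {name: settlement * d["fault"] for name, d in defendants.items()}
paid = {name: min(owed[name], d["can_pay"]) for name, d in defendants.items()}

# The shortfall left by insolvent parties shifts to whoever still has capacity.
shortfall = settlement - sum(paid.values())
for name, d in defendants.items():
    extra = min(d["can_pay"] - paid[name], shortfall)
    paid[name] += extra
    shortfall -= extra

for name, amount in paid.items():
    print(f"{name}: {defendants[name]['fault']:.0%} at fault, pays ${amount:,.0f}")
```

In this sketch the hospital and physician can only cover $1 million between them, so the remaining $9 million lands on the AI vendor, exactly the outcome described above.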
Off the top of my head, I can foresee dozens of similar examples in which an AI tool can be found partially liable for misrepresentations, errors of omission, unauthorized use of confidential intellectual property, and so on, in what can easily become an endless profusion of liability claims.
If the bloom is off the rose of Big Tech, the likelihood of a court assigning liability to those issuing AI tools increases proportionately. If the ruling is upheld by an appeals court, it will generally enter case law and become the basis for similar lawsuits assigning liability to those entities issuing AI tools.
That real harm will result from the use of AI tools is a given. The idea that those issuing these tools should be given a free pass because "we really didn't mean that you could use the tools to reduce human labor and increase accuracy" does not pass the sniff test, nor will it negate advocacy claiming that these tools implicitly make claims about utility that incur liability.
Use an AI tool, get sued. The Wild West of AI's claims of zero liability will soon enter the meat grinder of jurisprudence, and implicit claims of utility will be more than enough to incur liability in a court of law--as they should.
The legal spear in AI's back could prove fatal. A 1% error rate and 1% liability will add up fast.
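As a rough back-of-the-envelope sketch of why that adds up fast, consider how exposure scales when a tool is used millions of times a year. Every figure below is invented for illustration:

```python
# Back-of-the-envelope sketch with invented numbers: annual exposure
# when an AI tool is used at scale.

uses_per_year = 10_000_000   # hypothetical scans run through the tool
error_rate = 0.01            # 1% of outputs are materially wrong
harm_rate = 0.10             # fraction of errors that cause compensable harm and a claim
avg_judgment = 1_000_000     # hypothetical average judgment per successful claim
vendor_share = 0.01          # vendor's nominal share of fault

claims = uses_per_year * error_rate * harm_rate
nominal_exposure = claims * avg_judgment * vendor_share
worst_case = claims * avg_judgment  # if joint and several liability shifts the full amount

print(f"claims per year: {claims:,.0f}")
print(f"nominal 1% exposure: ${nominal_exposure:,.0f}")
print(f"worst-case exposure: ${worst_case:,.0f}")
```

Even under these modest assumptions, the vendor's nominal 1% share works out to $100 million a year, and joint and several liability can push the worst case far higher.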