The people's voice of reason

Are there other areas where AI has created cause for concern?

I believe you just answered a question on AI (artificial intelligence) relative to the legal field in December 2025. Are there other areas where AI has created cause for concern?

AI, as I said in December, is a hot topic. Technology has the ability to be enormously useful to humans. But three things often happen: first, its limitations may not be readily discernible; second, it may be used in ways that “machine learning” has not yet achieved; and third, there are too many bad actors out there who figure out how to misuse the technology.

The items I discuss here are not meant to create panic or to imply that AI will do away with jobs. Everything I discuss requires human input and human oversight. You will note from several of the examples I use that human scrutiny would often have prevented AI-generated errors, by not taking on faith what AI reports. In December, I advocated for one word: CAUTION!

AI’s flaws have become evident in law, medicine, and research, among many other fields. AI has misidentified structures in imaging modalities, placing them in the wrong locations. Over the decades I have seen many tools that attempted to outline internal body structures. The original tools used libraries of normal structures identified and contoured by physician experts. But all of these products failed in such a way that the medical professionals who needed them lost faith after years of promises with very slow progress. Then came AI, and accuracy increased significantly. But to a fault: some medical professionals who needed organs contoured in medical imaging put too much faith in the tools. Fortunately, that faith was usually placed in the less critical organs, and human attention played a larger part in identifying and contouring the organs most relevant to normal or diseased anatomy.

Johnson &amp; Johnson subsidiary Acclarent touted, in 2021, AI-based software for its TruDi navigation system for ENT surgeries. Over the next four years, at least ten patients were injured, mostly because the navigation system misidentified locations inside ENT-connected areas. The base of one patient’s skull was punctured, another suffered a cerebrospinal fluid leak caused by the device, and in two patients a major artery was damaged, causing each to suffer a stroke. While both stroke patients filed suit asserting that AI use was the cause, the investigations have neither confirmed nor ruled that out. Beyond this device, there are over 1,300 other AI-enabled devices. It was reported that one AI heart monitor misidentified abnormal heartbeats, and a diagnostic ultrasound device, also using AI software, misidentified body parts of an unborn child.

There are reports of many individuals seeking to self-diagnose by using a chatbot. The braver souls learn from the chatbot and use the information either to challenge their physician or to engage more deeply in the discussion of their own care. I learned long ago that people need health care advocates, whether that is the patient or an individual who will champion their health.

In December, of course, I discussed AI in law and pointed out the many cases, now referred to as hallucinations, that never existed and were made up by AI. In this context, hallucinations are AI-fabricated cases cited to support a lawyer's legal theories, whether for the plaintiff, the prosecution, or the defendant, in civil or criminal trials or appeals. Several attorneys have already received harsh tongue-lashings from the bench, including sanctions intended, for the most part, to hurt financially. The Alabama Supreme Court has become involved in the oversight of this technology where misuse or misapplication has reared its ugly head.

Finally, one more area of AI use that I would bet nearly all of you have engaged in is the AI overview summary you probably review each time you “google” a subject. I am as guilty as the rest of you. Within this article I have discussed AI hallucinations, which in the legal world are made-up legal cases. I would say that anything you research has the potential to include one or more hallucinations. I think it's okay to read the AI overview or summary, but don't take those results as the gospel. I would advocate looking further down the results and reading text from reputable, non-AI-generated websites. Read them for yourself, and not just the one source at the top, because you need a good understanding of what you seek to know, and not from a single source.

In connection with the above paragraph, a warning came via an article available online from Wired magazine. When looking at an AI-generated overview, check whether hyperlinks have been generated, especially if they relate to a product or service you wish to pay for. The Wired article suggests that AI may place a scam hyperlink within the overview. AI gathers information from multiple websites without checking, or understanding, whether a link to a company is legitimate. Those who seek to scam you may maintain multiple sites on the internet with scam links, hoping to take your money. After all, if you get an email claiming that your subscription to an internet security service has expired, with an easy link through which you can pay, do you blindly follow that link and pay? If I question whether a subscription needs renewing, I find the legitimate website and sign in to my account to determine whether my subscription has really expired. Usually, I find it has not. But if it has expired, I can use a link on the recognized legitimate website to take care of business. When AI gathers information for its overview, it tends to gather the most information available, even if that information comes from disreputable websites. There are reports of people calling telephone numbers linked in an overview and having the individual on the other end answer as if they were in customer service for that company. They will gladly take your information and your payment. Your money and information may now be in the hands of a scammer.

AI can be a good tool, but use the common sense that God gave you. The Bible contains the gospel truth; don't blindly take AI as the gospel.

This article is informative only and not meant to be all-inclusive. Additionally, this article does not serve as legal advice to the reader and does not constitute an attorney-client relationship. The reader should seek counsel from their attorney should any questions exist.

"No representation is made that the quality of legal services performed is greater than the quality of legal services performed by other lawyers."

THE VIEWS OF SUBMITTED EDITORIALS MAY NOT BE THE EXPRESS VIEWS OF THE ALABAMA GAZETTE.
