Article Last Updated 03/18/2026

Article Reviewed by a licensed insurance professional: Sam Meenasian (CA dept of insurance license #0F75955).

Estimated reading time: 7 minutes

Insurance has relied on technology for a long time. But over the last five years, AI has been cranked into overdrive.

Ok, so things have definitely changed.

Chatbots can generate quotes in seconds. Algorithms can support underwriting decisions, but underwriters remain legally responsible for final eligibility, classification, and pricing decisions. In many states, fully autonomous underwriting without human accountability is not permitted for commercial risks.

Human oversight is still very valuable. You need a real connection and genuine human understanding. And generative AI tools may produce confident but incorrect answers (‘hallucinations’ in AI safety terms). Traditional predictive models don’t ‘hallucinate,’ but they can still be wrong due to bad inputs, misclassification, outdated data, or use outside the model’s intended scope, which is why human validation matters in insurance.

A Balancing Act

AI does have some legitimate strengths. It can pull up loss history in milliseconds. It can read a business description and make an educated guess about the policy you are likely to need. It can keep records organized so you are not frantically searching through piles of paperwork at 2 a.m. All of that efficiency feels fantastic until you remember that efficiency and comprehension are not the same.

A good agent is valuable in areas AI cannot reach, like:

  • Deciphering what you really do as opposed to what a dropdown menu says you do
  • Talking to an underwriter who is being unnecessarily rigid
  • Spotting a coverage gap the algorithm missed because it did not fit the pattern it was looking for

AI tools can generate explanations, but they have no authority, accountability, or legal duty to advocate for you with a carrier. A qualified agent or broker can negotiate classification, underwriting intent, and documentation to support a fair outcome.

AI makes the industry run faster, but it does not have the instincts, the lived experience, the walk through the shop, the human intuition to actually know what your business is about. 

The Hidden Costs of AI Mistakes

When automation gets your coverage wrong, you’re looking at costs like:

  • Denied claims 
  • Fines and penalties 
  • Legal fees if an injured worker or customer sues because insurance won’t cover the incident
  • Back premiums and adjustments when the insurer realizes you were underinsured
  • Lost time dealing with the mess 

I had a client who purchased his general liability coverage through an online platform. One of the questions the automated system asked was if he had ever done any electrical work. He selected the “occasionally” box since he sometimes installed light fixtures. The algorithm coded him as low-risk for handyman electrical work and offered him a low premium.

Then he bid on and won a larger electrical project, and there was a fire. He filed a claim, and the carrier denied it. The policy language specifically excluded electrical work beyond the most basic fixture installations. The platform’s algorithm never asked the follow-up questions that would have identified that nuance. My client had to settle with the property owner for over $200K.

Hallucination Horror Stories

One of the scariest parts about AI and insurance is that it hallucinates. It spits out confident, assured answers that are dead wrong. A person who sounds that certain is usually right; an AI can sound just as certain and completely miss the mark. And you will have to catch and correct it, or everything after that will go to hell in a handbasket.

Remember that landscaping client who got a general liability quote from a chatbot? His large-scale tree-trimming job was somehow categorized as “light office duties.” So the premium the system spat back at him looked super low. Too low. And completely incorrect. When an accident occurred, his claim was denied; the work had been misclassified.

I know another contractor who described his work as “occasional height work” while doing small home repairs. The AI interpreted that phrasing too literally and classified the job as lower risk. A few months later, he suffered a fall at a multi‑story job site, and the carrier denied the claim. Many contractor programs have specific eligibility rules, exclusions, or endorsements for roofing and other work at height.

That failure traces back to a fundamental weakness in AI: it does not really understand context the way a good agent can uncover it over the course of a conversation. Plus, AI has no real-world experience.

The Soft-Skill Moments That AI Will Never Replace

After more than 20 years in the insurance trenches, I am still enthralled by those magic moments when the digital channels stop and a real person does his or her job. A client walks through the door or picks up the phone to tell you about the risk or need he or she knows about, and yet, despite carefully crafted algorithms and detailed underwriting questions, there is always one more: the thing no one asked about, the one you, the agent, “just know” is there.

OK, so it’s time for a policy review, and the client makes an appointment to meet with you. An insurance robot will dutifully ask all of the prescreening questions and transcribe the business owner’s responses as they are typed or tapped on a screen. Suppose the business has had a “minor” accident in the past six months. The client answers the questions about this accident, and an algorithm adjusts the premium based on those answers. But you have a face-to-face appointment or phone call. As the client describes the accident, you read his or her body language, hear a slight hesitation in the voice, or notice avoided eye contact or “umms.” Now you are asking questions such as, “That almost-accident with the forklift, let’s talk about that,” and you discover it involved an inexperienced new employee. The underwriter will use your inputs to find the correct premium, but you may have identified the training gap that could save a life down the road.

Let’s say you decide to visit a client and walk through the warehouse or office unannounced. He or she has completed an online risk questionnaire and self-described the business as “light manufacturing.” No underwriter or algorithm is watching the client hang a utility cord from the ceiling to bolt an empty pallet rack together, stack 10,000 lbs of weight on a rack designed to carry only 5,000 lbs, or have an employee manually lift a pallet full of heavy machine parts. A human agent will notice these things. These are the types of underwriting findings you can submit to get the business properly rated without being pushed into substandard coverage, and they are exactly what most algorithms and digital tools will never see.

It’s 2 a.m., and a client calls in a panic because a fire has just started at his or her business. A robot offers a claims portal with fields to fill in and documentation to upload or attach. A client does not need an automated claims portal at 2 a.m., nor does he or she want to make document decisions or click “yes” or “no” under fire conditions to get the needed business coverage. The client needs to hear a familiar voice say, “I’ve already called the adjuster, and I am on my way to meet you at your location.” These are the human moments when an agent has a client’s back and provides a safety net in times of great stress and vulnerability, and they are what build relationships.

What the Next Decade Might Look Like

Expect more data-driven underwriting at renewal and more frequent exposure-based adjustments (audits, endorsements, and underwriting changes) rather than true ‘real-time’ repricing for most commercial policies.

But the gray areas and the advocacy will probably be left to us humans.

The Ethical Issues Behind AI in Insurance — and the Regulations That Could Keep It in Check

AI also brings a tsunami of ethical concerns that simply didn’t exist when underwriting was 99% a human endeavor.

  • The biggest issue is transparency. When an algorithm decides a premium, most customers have no idea why that decision was made. That lack of visibility creates the risk of unfair outcomes.
  • Privacy also comes into play. Many insurance algorithms gather data far outside traditional underwriting.

State insurance departments and federal agencies are already exploring rules that would require carriers to disclose when AI is involved in decision-making and to explain how those decisions were reached. Some proposals would go further to require insurers to audit their algorithms for bias before deployment.

Why Insurance Still Requires Licensed Human Judgment

Insurance is not a product—it is a legal contract governed by state law, interpreted through policy language, exclusions, endorsements, and precedent.

A licensed insurance professional provides value where AI fundamentally cannot:

  • Interpreting ambiguous business operations that do not fit ISO class codes cleanly.
  • Explaining how endorsements materially alter coverage—not just price.
  • Advocating with underwriters when a risk is misclassified or over-penalized.
  • Identifying silent exclusions that only surface during claims.
  • Ensuring compliance with state-specific insurance statutes and carrier guidelines.

Daniel Smith

Daniel Smith is a New York attorney and legal writer with experience on both sides of insurance and coverage disputes. His background in litigation informs a practical, business-focused perspective on risk, liability, and the insurance issues companies encounter in real operations.