How to cite this WIREs title:
WIREs Data Mining Knowl Discov
Impact Factor: 7.250

Designing and deploying insurance recommender systems using machine learning



Abstract
Recommender systems have become extremely important to industries where customer interaction and feedback are paramount to business success. For companies facing the changes that come with ever‐growing markets, providing product recommendations to new and existing customers is a challenge. Our goal is to give customers personalized recommendations, based on what other customers with similar portfolios have, so that they are adequately covered for their needs. Our system uses customer characteristics in addition to customer portfolio data. Because the number of recommendable products is relatively small compared with other recommender domains, and missing data is relatively frequent, we chose Bayesian networks to model our systems. We also present a deep‐learning‐based approach for making recommendations to prospects (potential customers), for whom only external marketing data is available at prediction time.

This article is categorized under:
Application Areas > Industry Specific Applications
Algorithmic Development > Structure Discovery
Algorithmic Development > Bayesian Models
Technologies > Machine Learning
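The abstract's key modeling choice — Bayesian networks, precisely because missing features are common — can be illustrated with a minimal sketch. Under a naive Bayes assumption, a missing feature can simply be skipped at scoring time, which marginalizes it out exactly (since its conditional probabilities sum to 1). All product names, feature names, and probabilities below are hypothetical, not taken from the article.

```python
from math import prod

# Hypothetical target priors and conditional probability tables
# P(feature_value | target). All numbers are illustrative only.
PRIORS = {"auto": 0.6, "property": 0.4}
CPTS = {
    ("auto", "has_teen_driver"): {True: 0.3, False: 0.7},
    ("auto", "urban"): {True: 0.55, False: 0.45},
    ("property", "has_teen_driver"): {True: 0.2, False: 0.8},
    ("property", "urban"): {True: 0.4, False: 0.6},
}

def score_targets(evidence):
    """Posterior over recommendable products; features absent from
    `evidence` are skipped, which marginalizes them out under the
    naive Bayes independence assumption."""
    scores = {}
    for target, prior in PRIORS.items():
        likelihood = prod(
            CPTS[(target, feat)][val]
            for feat, val in evidence.items()
            if (target, feat) in CPTS
        )
        scores[target] = prior * likelihood
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}

# The 'urban' feature is missing here; the model still yields a ranking.
print(score_targets({"has_teen_driver": True}))
```

This is only a two-node naive Bayes toy; the deployed system described in the article learns richer network structures per state and product line.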
Schema of our deployed recommender system (APEX is the agent system/interface; SSO stands for Service and Sales Operations)
Microservices based server‐less deployment architecture
Illustration of the deep learning architectures: baseline, 4HL, and 6HL (LeNail, )
This illustrates the structure described in Section 6.3.1. For example, let us assume T1, T2, and T3 are targets and we pick the top‐2 internal features for each target (i.e., nodes F1, …, F4). For each internal feature we then pick the top‐2 external features and add them to the BN (i.e., nodes )
Performance of targets when scored k‐at‐a‐time using our algorithm
Comparison between IMC and our algorithm scoring all the targets at once with missing features
The structure described in Section 5.1.6. For example, let us assume {T1, T2, T3} are targets and we pick the top‐2 features for each target. Notice how we duplicate node F1 for {T1, T3} and node F2 for {T1, T2}
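The per-target selection with node duplication described in this caption can be sketched as follows. The relevance scores, feature names, and node-naming scheme (a per-target copy `F@T`) are assumptions for illustration; the scores are chosen so that, with k = 2, F1 is selected for {T1, T3} and F2 for {T1, T2}, as in the caption.

```python
# Hypothetical feature-relevance scores per target (e.g., mutual
# information); the numbers are illustrative only.
RELEVANCE = {
    "T1": {"F1": 0.9, "F2": 0.8, "F3": 0.1},
    "T2": {"F1": 0.2, "F2": 0.7, "F3": 0.6},
    "T3": {"F1": 0.5, "F2": 0.1, "F3": 0.4},
}

def build_edges(relevance, k=2):
    """Return (feature_copy, target) arcs for the BN. Each selected
    feature is added as a per-target copy, so a feature shared by
    several targets appears once per target (node duplication)."""
    edges = []
    for target, scores in relevance.items():
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        for feat in top_k:
            edges.append((f"{feat}@{target}", target))
    return edges

print(build_edges(RELEVANCE))
```

With these scores, F1 gets copies for T1 and T3, and F2 gets copies for T1 and T2, mirroring the duplication shown in the figure.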
(a) Sample structure among targets, (b) Naive Bayes graph connecting features and target
Conversion Across All States versus Baseline
Example of the property network for the state of OH. Complexity: 50 nodes, 122 arcs, avg. of 2.44 parents/children per node
Example of the auto network for the state of OH. Complexity: 33 nodes, 89 arcs, avg. of 2.69 parents/children per node

