Large Language Models (LLMs) for Lending and Mortgage
Use Cases, Architecture, Costs
ScienceSoft relies on 36 years of experience in artificial intelligence and 20 years in lending software engineering to deliver secure and compliant LLM solutions tailored to the unique operations of lending companies.
Large Language Models in Lending: Key Aspects
Large language models (LLMs) for lending and mortgage are rapidly gaining popularity due to their ability to efficiently automate up to 90% of data processing and borrower servicing tasks that traditionally required extensive human involvement.
Using LLMs, lenders can instantly capture, consolidate, and summarize massive amounts of unstructured risk-relevant data from corporate and external sources. This accelerates underwriting and helps close loans up to 2.5x faster while delivering a 20–30% productivity uplift. LLM-produced risk predictions allow for fairer and more accurate borrower profiling, contributing to an average 5–35% increase in written loans and a 20% reduction in defaults.
Similarly, LLMs can review voluminous sectoral compliance documents like Fannie Mae guidelines in seconds, which lets lending teams retrieve case-relevant clauses faster and with substantially less effort.
Lastly, LLM-powered AI assistants outperform traditional chatbots in response precision, conversational convenience, and personalization capabilities. They help introduce higher-quality borrower self-service, leading to increased client retention and revenue growth.
The Market of LLMs for Loans and Mortgage Lending
The market for generative AI in financial services is projected to grow at a CAGR of 28.1% and reach $12.14 billion by 2033. The segment of LLM-supported natural language processing (NLP) is anticipated to hold the largest market share and grow accordingly.
An early adopter of intelligent automation, the lending industry is at the forefront of large language model adoption, with over 70% of mortgage lenders already using or planning to integrate GenAI and LLMs into their workflows. The growing demand for LLMs in lending is driven by the need to enhance borrower experiences, streamline and speed up data-intensive operations throughout the loan cycle, and simplify compliance with evolving sectoral regulations.
LLM Use Cases in Lending and Mortgage
LLM Architecture
Setting up a lending LLM assistant is not as simple as picking the right AI model and integrating it with existing IT systems. To bring value, pretrained LLMs need to be adapted to the lender’s unique business data, and the chosen approach to model enhancement largely determines the solution’s final capabilities and TCO. In most cases, applying cost-effective retrieval-augmented generation (RAG) techniques is enough to acquaint an LLM with the organization’s contextual data. So, we usually assess RAG’s feasibility for our client’s needs first and consider costly model retraining only when the intended capabilities demand deeper customization.
Below, ScienceSoft’s AI consultants share a sample architecture of RAG-enabled lending LLM solutions we deliver, describing their key components and data flows.
- Users interact with pretrained LLMs (GPT-4, Claude, Llama, etc.) via role-specific LLM app interfaces (e.g., for loan officers, underwriters, borrowers). Any natural-language request (prompt) from a user is instantly routed to the orchestrator for processing.
- The orchestrator, the core component of the app’s back end, is responsible for executing the prompt processing logic. ScienceSoft powers the orchestration layer with LLMOps, an extension of DevOps that caters specifically to the continuous integration and monitoring needs of LLM solutions. The LLMOps environment ensures smooth and controllable communication between the solution’s parts.
- The orchestrator aggregates prompt-relevant structured data (borrower names, loan application dates, loan limits, APRs, etc.) directly from the lender’s data storage. Simultaneously, it queries the RAG embedding model to retrieve unstructured contexts like borrower solvency documents, title documents, debtor communication histories, and lending compliance guidelines. To enable effective search, the unstructured data first needs to be converted to a searchable vector format. ScienceSoft usually implements a vectorization pipeline that automates data capture, cleansing, chunking, conversion to vectors, and storage in a vector database.
- The results of lexical (keyword) search across structured data and semantic search across vectors are merged, rescored, and combined into a single optimal output by a reranking model. This improves the accuracy and comprehensiveness of the contextual data fed to LLMs.
- The orchestrator combines the prompt with the reranked contextual data and inserts them into a pre-engineered prompt template. ScienceSoft builds custom templates for common lending-specific inquiries, taking into account the prompt length (token) limits set by LLM providers. The enriched prompt is then routed to the LLM.
- The pretrained LLM processes the prompt and returns the response to the orchestrator, where inference logging and validation (e.g., vetting against bias and toxicity) occur. The valid response is then routed to the user.
- The user provides feedback on the relevance and precision of the LLM output. The feedback is used for LLM fine-tuning to provide higher-quality responses (e.g., through reinforcement learning from human feedback).
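To make the vectorization pipeline described above concrete, here is a minimal Python sketch of its cleansing and chunking steps: raw document text is normalized, then split into overlapping word windows sized for an embedding model. Function names, chunk size, and overlap are illustrative assumptions, not ScienceSoft’s actual implementation.

```python
import re

def clean(text: str) -> str:
    """Cleansing step: collapse whitespace left over from document extraction."""
    return re.sub(r"\s+", " ", text).strip()

def chunk(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Chunking step: split cleaned text into overlapping word windows.

    Overlap keeps context that straddles a chunk boundary retrievable
    from at least one chunk.
    """
    words = clean(text).split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]
```

In a full pipeline, each chunk would then be passed to an embedding model and stored, with its metadata, in a vector database.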
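The retrieval, reranking, and prompt-assembly flow from the steps above can be sketched end to end. This is a toy, self-contained model of the orchestrator’s logic: the hashed bag-of-words "embedding," the in-memory vector store, the template wording, and the word-count token budget are all stand-in assumptions; a production system would use a real embedding model, a vector database, a dedicated reranker, and the provider’s tokenizer.

```python
import math

def embed(text: str, dims: int = 64) -> list[float]:
    # Toy embedding: hashed bag-of-words, L2-normalized. A real deployment
    # would call a dedicated embedding model instead.
    vec = [0.0] * dims
    for token in text.lower().split():
        vec[hash(token) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    """In-memory stand-in for a vector database holding chunked documents."""
    def __init__(self):
        self.items: list[tuple[list[float], str]] = []

    def add(self, chunk: str) -> None:
        self.items.append((embed(chunk), chunk))

    def search(self, query: str, k: int = 3) -> list[tuple[float, str]]:
        qv = embed(query)
        scored = [(cosine(qv, v), text) for v, text in self.items]
        return sorted(scored, reverse=True)[:k]

PROMPT_TEMPLATE = (
    "You are a lending assistant. Answer using ONLY the context below.\n"
    "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)

def build_prompt(question: str, store: VectorStore, token_budget: int = 200) -> str:
    # A production orchestrator would also blend in structured-data (SQL)
    # results and rescore everything with a reranking model before this step.
    context_chunks, used = [], 0
    for score, chunk in store.search(question):
        cost = len(chunk.split())  # crude proxy for the provider's token count
        if used + cost > token_budget:
            break
        context_chunks.append(chunk)
        used += cost
    return PROMPT_TEMPLATE.format(
        context="\n---\n".join(context_chunks), question=question
    )
```

The returned string is what would be sent to the pretrained LLM; the budget check mirrors the prompt length limits mentioned above.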
How Lending and Mortgage Companies Benefit From LLMs
Rocket Mortgage Relies on LLMs to Close More Loans Faster
Rocket Mortgage, the largest retail mortgage lender in the US, developed an LLM-powered platform called Rocket Logic to increase the efficiency and accuracy of its loan origination processes. The platform’s LLM algorithms automatically validate borrower documents, extract the required information, and summarize it for further processing.
With Anthropic’s best-in-class LLMs at its core and access to contextual insights from 10+ petabytes of Rocket Mortgage’s proprietary data, the solution automatically processes 70% of borrower documents and handles 90% of data extractions, saving the company’s underwriters 9,000+ hours of manual work monthly. Using LLM-enabled automation, Rocket Mortgage now manages to close loans 2.5x faster than the industry average.
First Financial Bank Uses LLMs to Elevate Borrower Experiences
First Financial Bank, the fifth-oldest national bank in the US, launched an LLM-powered digital assistant, Gabby, to efficiently resolve customer queries and optimize the financial teams’ workloads. The solution, built on KAI-GPT, a finance-specific LLM by Kasisto, and integrated with the LinkLive communication platform, automatically processes omnichannel borrower inquiries and responds to them in real time.
First Financial Bank’s findings show that 90% of Gabby’s inferences are “contained,” i.e., borrowers don’t need to contact the bank’s reps for further assistance. LLM-enabled borrower support and personalized suggestions on the best-fitting lending products brought the bank a 35% increase in mortgages, a 28% increase in personal loans, and a 5% increase in vehicle loans.
How to Guarantee the Success of Lending and Mortgage LLM Solutions
Drawing on decades of experience in AI, ScienceSoft’s consultants suggest that the following steps have a major impact on the success of lending LLM solutions:
Give LLMs access to comprehensive context
When LLMs lack access to lender-specific contextual data, they fail to align their responses with the lender’s audience and fair service policies and may even fabricate misleading and senseless answers (i.e., “hallucinate”). Applying RAG, fine-tuning, or pretraining methods to equip LLMs with the lender’s proprietary knowledge and up-to-date servicing rules is critical to ensuring accurate, relevant, and compliant model inferences.
Achieve full explainability of LLM logic
A lender must understand the rationale behind LLM suggestions. Otherwise, it may be hard or impossible to prove to borrowers and regulators that decisions are unbiased and compliant. ScienceSoft uses techniques like LIME and SHAP to make LLM logic transparent, traceable, and easy to interpret. Additionally, we incorporate explainability requirements into prompt templates so that an LLM app always backs its inferences with source citations.
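One way to bake such an explainability requirement into a prompt template is to number the retrieved sources and instruct the model to cite them after every factual statement. The template wording and function names below are illustrative assumptions, not a specific production template.

```python
# Hypothetical prompt template that enforces source citations on every answer.
CITATION_TEMPLATE = """You are a mortgage compliance assistant.
Answer the question using only the numbered sources below.
After every factual statement, cite the supporting source as [n].
If no source supports an answer, reply "Not found in the provided sources."

Sources:
{sources}

Question: {question}
"""

def render_prompt(question: str, sources: list[str]) -> str:
    # Number the retrieved context chunks so the model can reference them.
    numbered = "\n".join(f"[{i}] {s}" for i, s in enumerate(sources, start=1))
    return CITATION_TEMPLATE.format(sources=numbered, question=question)
```

Because the citation indices map back to concrete retrieved documents, reviewers can trace every claim in a response to its source, which supports the audit trail regulators expect.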
Establish robust security of sensitive data
Insufficient protection of an LLM app and the data it operates on poses the risk of illegitimate access to and misuse of a lender’s sensitive business information. Applying secure architecting and coding practices, privacy-preserving prompt tuning (RAPT) methods, role-based access controls, multi-factor authentication, data encryption, intelligent user behavior analytics (UBA), and advanced network protection tools helps safeguard LLM software from known and emerging threats.
Costs of Implementing a Lending LLM Solution
From ScienceSoft’s experience, implementing an LLM solution for lending may cost from $250,000 to $1,000,000+, depending on software complexity, the approach to LLM customization, architectural and tech stack choices, as well as security and compliance requirements.
Here are our sample estimates for various scenarios:
$250,000–$350,000
An LLM chatbot that handles borrower communication. RAG is applied to “upskill” pretrained LLMs on specialized lending knowledge and company data.
$300,000–$500,000+
An LLM copilot for lending specialists (loan officers, loan servicing specialists, compliance managers, etc.). The underlying LLMs are adapted to the lender’s specifics using RAG and, if needed, PEFT.
$1,000,000+
An LLM-powered assistant for lending professionals, retrained and fine-tuned to reason on highly specific service aspects (e.g., mortgage restructuring) or complex lending models (e.g., syndicated lending).
Lending LLM Consulting and Implementation With ScienceSoft
In AI development since 1989 and in lending IT since 2005, ScienceSoft provides full-cycle LLM services to help consumer, commercial, and mortgage lenders successfully implement tailored LLM solutions.
Lending LLM consulting
We design the features, architecture, and tech stack for your LLM solution, taking into account your specific automation needs and relevant compliance requirements. You get advice on the secure and pragmatic approach to LLM implementation and receive a detailed project plan with realistic cost and time estimates.
Lending LLM implementation
We develop the server side and user interfaces for your LLM app, integrate the solution with the selected LLM(s), and enhance the model(s) with your business-specific data. Our team conducts rigorous app testing and can provide continuous app maintenance. You get an MVP of your lending LLM software in 1–4 months.