Learn how to build an MCP client with Spring Boot AI using simple, step-by-step instructions. This beginner-friendly guide covers setup, configuration, and testing for seamless AI integration.
1. Introduction
Welcome to the third installment of our Model Context Protocol (MCP) series! In our first blog post, we introduced the fundamental concepts of MCP and explained how it helps AI models connect with external data sources more efficiently. In our second blog post, we showed you how to build MCP servers using Spring Boot AI, creating specialized servers for MongoDB and Confluence Cloud.
In this post, we’ll build an MCP client application using Spring Boot AI. This client will seamlessly integrate with our previously built MCP servers, allowing an AI model to access and utilize data from multiple sources through a unified interface.
2. How to Build an MCP Client with Spring Boot
Building an MCP client with Spring Boot AI involves these key steps:
- Set up your project structure with the necessary dependencies in your pom.xml file
- Configure MCP servers in your application properties, specifying how to connect to each server
- Create a ChatClient configuration to register MCP tools with your AI model
- Build a controller to expose an endpoint for interacting with your AI model
- Test your implementation with queries that require accessing your data sources
Let’s first understand some core concepts before diving into the implementation.
3. Core Concepts in MCP Client Development
Understanding these foundational concepts will make it easier to implement and troubleshoot your MCP client.
3.1. The ChatClient and Its Role
The ChatClient is the central component that connects your application to the AI model. In an MCP implementation, it:
- Registers tools with the model: The client informs the AI model about available tools and their capabilities
- Routes tool calls: When the model decides to use a tool, the ChatClient forwards the request to the appropriate handler
- Processes responses: It handles the tool’s response and incorporates it into the model’s context
Think of the ChatClient as an orchestrator that manages communication between your application, the AI model, and your external tools.
3.2. Tool Registration and Discovery
When your MCP client starts:
- It discovers available MCP servers based on the configuration specified in application.properties or application.yaml
- Each server advertises its available tools, including descriptions and parameter schemas
- These tools are registered with the ChatClient
- The ChatClient makes these tools available to the AI model
This discovery process happens at MCP client application startup.
3.3. Tool Call Flow
When a user sends a prompt that requires accessing external data:
- The AI model analyzes the prompt and decides which tool to call
- It generates a tool call with the appropriate parameters
- The ChatClient intercepts this call and forwards it to the MCP client
- The MCP client identifies which server provides the tool and forwards the call
- The server processes the request and returns the result
- The result is sent back to the AI model, which incorporates it into its response
- The final response is returned to the user
This flow enables the AI to access external data sources without exposing them directly to the model.
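The steps above can be sketched in plain Java. This is a hypothetical, framework-free illustration of the routing idea only; names like ToolCallFlowSketch, TOOL_REGISTRY, and listCollections are invented for the example and are not Spring AI or MCP APIs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Framework-free sketch of the tool call flow: a registry of tools advertised
// by servers, a "model" that picks a tool, and a client that routes the call.
public class ToolCallFlowSketch {

    // Steps 4-5: the MCP client keeps a registry of tools and forwards each
    // call to the handler (server) that owns the tool.
    static final Map<String, Function<Map<String, String>, String>> TOOL_REGISTRY = new HashMap<>();

    public static void main(String[] args) {
        // Discovery phase: a server registered this tool at startup.
        TOOL_REGISTRY.put("listCollections",
                params -> "collections in " + params.get("database") + ": [orders, products]");

        // Steps 1-2: the model decides which tool to call and with what parameters.
        String chosenTool = "listCollections";
        Map<String, String> toolParams = Map.of("database", "ecommerce");

        // Steps 3-5: the client routes the call and the server returns a result.
        String toolResult = TOOL_REGISTRY.get(chosenTool).apply(toolParams);

        // Steps 6-7: the result is folded back into the model's final answer.
        System.out.println("Model answer using tool result -> " + toolResult);
    }
}
```

In the real flow, Spring AI's ChatClient and the MCP client starter play the registry-and-router role shown here.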
3.4. MCP Client Connection Types
Spring AI supports two types of MCP client connections:
- STDIO Connection: Launches MCP servers as separate processes and communicates via standard input/output
- Network Connection: Connects to MCP servers running at specified network locations (hosts and ports)
For development and testing, STDIO is often simpler, while network connections are more suitable for production deployments.
4. Building an MCP Client with Spring Boot AI
Now that we understand the core concepts, let’s implement an MCP client that can communicate with our MongoDB and Confluence MCP servers. Any other MCP server can be configured in a similar way.
Step 1: Setting Up Project Dependencies
First, let’s set up our Spring Boot project with the necessary dependencies in our pom.xml:
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-openai-spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-mcp-client-spring-boot-starter</artifactId>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-bom</artifactId>
<version>${spring-ai.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
These dependencies provide:
- spring-boot-starter-web: Basic web application functionality for creating REST endpoints
- spring-ai-openai-spring-boot-starter: Integration with OpenAI-compatible APIs for accessing AI models
- spring-ai-mcp-client-spring-boot-starter: Core MCP client functionality for connecting to MCP servers
The dependencyManagement section uses Spring AI’s Bill of Materials (BOM) to ensure compatibility between Spring AI components.
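The BOM import assumes a spring-ai.version property is defined in the same pom.xml. If it isn’t, a properties entry like the following can be added; the exact version shown is an assumption, so use whichever Spring AI release your project targets (these starter artifact names match the 1.0 milestone line):

```xml
<properties>
    <spring-ai.version>1.0.0-M6</spring-ai.version>
</properties>
```

Milestone builds may also require the Spring milestone repository (https://repo.spring.io/milestone) in your repositories section.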
Step 2: Configuring MCP Servers
Next, we’ll configure our application to connect to the MongoDB and Confluence MCP servers we built in the previous tutorial. Add the following to your application.yaml file:
logging:
level:
io:
modelcontextprotocol:
client: DEBUG
spec: DEBUG
spring:
application:
name: spring-boot-ai-mcp-client
ai:
openai:
api-key: <openrouter-api-key>
base-url: 'https://openrouter.ai/api'
chat:
options:
model: 'google/gemini-2.0-flash-lite-preview-02-05:free'
mcp:
client:
stdio:
connections:
mongo-mcp-server:
command: java
args:
- '-jar'
- <path-to-mongo-mcp-server-jar-file>
env:
MONGO_HOST: localhost
MONGO_PORT: '27017'
confluence-mcp-server:
command: java
args:
- '-jar'
- <path-to-confluence-mcp-server-jar-file>
env:
CONFLUENCE_AUTH_EMAIL: '${CONFLUENCE_AUTH_EMAIL}'
CONFLUENCE_AUTH_API_TOKEN: '${CONFLUENCE_AUTH_API_TOKEN}'
CONFLUENCE_BASE_URL: '${CONFLUENCE_BASE_URL}'
This configuration:
- Sets DEBUG logging for MCP components to help with troubleshooting
- Configures OpenRouter as our AI provider (which offers free access to various models)
- Defines two MCP server connections using the STDIO connector:
- mongo-mcp-server: Our MongoDB MCP server that connects to a local MongoDB instance
- confluence-mcp-server: Our Confluence MCP server that connects to Confluence Cloud
Each server configuration specifies:
- The command to launch the server (Java in this case)
- Arguments to pass (the path to the JAR file)
- Environment variables needed by the server
The MongoDB server needs the host and port for your MongoDB instance, while the Confluence server requires authentication details for your Confluence Cloud account.
Remember to replace placeholders (like <openrouter-api-key> and <path-to-…>) with your actual values.
Note about OpenRouter: We’re using OpenRouter in this example because it provides free access to various AI models that support tool calling, including Google’s Gemini models. If you prefer to use Claude directly, you can add the Anthropic Spring AI dependency and configure it accordingly.
Step 3: Creating the ChatClient Configuration
Now, we’ll create a configuration class that sets up our ChatClient with the MCP tools:
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class ChatClientConfig {
@Bean
public ChatClient chatClient(ChatClient.Builder chatClientBuilder, ToolCallbackProvider tools) {
return chatClientBuilder.defaultTools(tools).build();
}
}
This simple configuration does something powerful:
- It injects a ChatClient.Builder and a ToolCallbackProvider
- The ToolCallbackProvider contains all registered tools, including those from our MCP servers
- We register these tools with the ChatClient by calling defaultTools(tools)
- We build and return the configured ChatClient
Spring Boot’s auto-configuration handles the discovery and registration of MCP tools in the background, so we don’t need any additional code for this.
Step 4: Building a Controller
Finally, let’s create a REST controller to expose an endpoint for chatting with our AI:
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
@RequestMapping("/chat")
public class ChatController {
private final ChatClient chatClient;
@Autowired
public ChatController(ChatClient chatClient) {
this.chatClient = chatClient;
}
@PostMapping("/ask")
public String chat(@RequestBody String userInput) {
return chatClient.prompt(userInput).call().content();
}
}
Our controller:
- Exposes a /chat/ask endpoint that accepts POST requests
- Takes the user’s input as the request body
- Calls prompt() on the ChatClient, which processes the input and triggers any necessary tool calls (to the Confluence or Mongo MCP servers)
- Returns the model’s final response as a string
This minimalist implementation focuses on the core functionality. In a production application, you would likely add error handling, request validation, and a more structured response format.
Step 5: Testing the Implementation
With our MCP client implemented, we can now start the application and test it with some example queries. While doing so, ensure that you pass the following environment variables with their correct values:
For the Confluence MCP server, pass:
- CONFLUENCE_AUTH_EMAIL = <YOUR_CONFLUENCE_AUTH_EMAIL>
- CONFLUENCE_AUTH_API_TOKEN = <YOUR_CONFLUENCE_AUTH_API_TOKEN>
- CONFLUENCE_BASE_URL = <YOUR_CONFLUENCE_BASE_URL>
For the Mongo MCP server:
- MONGO_PORT = <MONGO_PORT>
- MONGO_HOST = <MONGO_HOST>
When you run your Spring Boot application:
- It will start the configured MCP servers as separate processes
- Establish connections to these servers
- Discover and register their tools
- Make these tools available to the AI model
Now you can send requests to your /chat/ask endpoint using any REST client, such as Postman or cURL.
Example 1: Querying Confluence Data
Let’s say we want to list the pages in our Confluence space:
curl --location 'localhost:8080/chat/ask' \
--header 'Content-Type: text/plain' \
--data 'list all my confluence pages under this space key "<YOUR_CONFLUENCE_SPACE_ID>"'
When the AI model receives this query:
- It recognizes that it needs to access Confluence
- It calls the appropriate tool provided by our Confluence MCP server
- The server queries Confluence Cloud for the relevant information
- The results are returned to the AI model
- The model incorporates this information into its response
Demo Video:
In this video, we showcase how our MCP client, built using Spring Boot AI, interacts seamlessly with the Confluence MCP Server we created in our last blog.
We demonstrate commands like:
- ✅ List all Confluence pages under a specific space 📄
- ✅ Fetch page history using a page ID 📜
- ✅ Create a new Confluence page with dummy content 📝
Example 2: Querying MongoDB Data
Now, let’s try a query that requires accessing our MongoDB database:
curl --location 'localhost:8080/chat/ask' \
--header 'Content-Type: text/plain' \
--data 'list all collections that are present under mongo database named "ecommerce"'
When the AI model receives this query:
- The AI recognizes it needs to query the ecommerce database
- It calls the MongoDB tool provided by our MongoDB MCP server
- The server executes the appropriate query against MongoDB
- The results are returned to the AI model
- The model analyzes the data and formulates a response
Demo Video:
In this video, we showcase how the MCP client built using Spring Boot AI interacts with the MongoDB MCP server that we created in our previous blog. Follow along as we execute commands like:
- ✅ Listing all databases 📂
- ✅ Executing complex queries 🔍
- ✅ Creating a new database with a specified collection 🏗️
5. Advanced Configuration Options
Now that we have our basic MCP client working, let’s explore some advanced configuration options that might be useful for production deployments.
5.1. Using Network Connections
Instead of launching MCP servers as child processes, you might want to connect to servers running as separate services:
spring:
ai:
mcp:
client:
sse:
connections:
mongo-mcp-server:
url: http://localhost:8080
confluence-mcp-server:
url: http://localhost:8081
This configuration connects to MCP servers running at the specified hosts and ports, which is more suitable for containerized or microservice architectures.
5.2. Using Multiple AI Providers
You can configure your application to work with multiple AI providers:
spring:
ai:
openai:
api-key: ${OPENAI_API_KEY}
chat:
options:
model: gpt-4o
anthropic:
api-key: ${ANTHROPIC_API_KEY}
chat:
options:
model: claude-3-7-sonnet-latest
This allows you to create multiple ChatClient beans and choose between them based on your requirements.
5.3. Using the Claude Desktop JSON Format for MCP Servers
If you are interested in configuring your MCP servers in the Claude Desktop JSON config format, then you can refer to the example below. In this example, we have configured the Confluence and Mongo MCP servers. All other servers can be configured in a similar way.
First, add the following to your application configuration:
spring:
ai:
mcp:
client:
stdio:
servers-configuration: classpath:mcp-servers.json
Then, create your mcp-servers.json file inside the resources folder with the following content:
{
"mcpServers": {
"confluence-mcp-server": {
"command": "java",
"args": [
"-jar",
"<complete-path-to-confluence-mcp-jar-file>.jar"
],
"env": {
"CONFLUENCE_BASE_URL": "<your-confluence-base-url>",
"CONFLUENCE_AUTH_EMAIL": "<your-confluence-email>",
"CONFLUENCE_AUTH_API_TOKEN": "<your-confluence-auth-api-token>"
}
},
"mongo-mcp-server": {
"command": "java",
"args": [
"-jar",
"<complete-path-to-mongo-mcp-jar-file>.jar"
],
"env": {
"MONGO_HOST": "localhost",
"MONGO_PORT": "27017"
}
}
}
}
This configuration uses the Claude Desktop JSON format to set up STDIO connections for the Confluence and Mongo MCP servers. You can apply the same pattern to configure any other MCP servers as needed.
Currently, the Claude Desktop format supports only STDIO connection types.
6. Source Code
You can find our implementations for the MCP Client, Confluence MCP Server, and Mongo MCP Server on our GitHub:
- SpringBoot AI MCP Client Example: View on GitHub
- Confluence MCP Server Example: View on GitHub
- Mongo MCP Server Example: View on GitHub
7. Things to Consider
When deploying your MCP client in production, consider these important factors:
- Error Handling and Resilience: Ensure your application handles errors gracefully so that it remains responsive even when MCP servers encounter issues. Consider implementing mechanisms like circuit breakers to avoid cascading failures.
- Security Considerations: Securely store your API keys and credentials, and implement proper authentication and authorization to restrict access to sensitive data.
- Performance Optimization: Optimize performance by caching frequently accessed data and monitoring the latency of tool calls.
- Scalability and Flexibility: Design your client to support multiple MCP servers and maintain modular configurations so that changes to one server do not affect the overall system. This flexibility makes it easier to integrate additional data sources in the future.
- Robust Testing and Validation: Test your MCP client thoroughly under various scenarios, including edge cases and high-load conditions. Use both unit and integration testing, and simulate real-world production conditions to ensure resilience before deployment.
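As a minimal sketch of the error-handling point above, here is a framework-free fallback wrapper around a tool call. The names FallbackSketch and callWithFallback are invented for illustration; in a real deployment you would more likely reach for a resilience library such as Resilience4j.

```java
import java.util.function.Supplier;

// Minimal sketch of graceful degradation around a tool call:
// if the call throws, substitute a fallback answer instead of failing the request.
public class FallbackSketch {

    static String callWithFallback(Supplier<String> toolCall, String fallback) {
        try {
            return toolCall.get();
        } catch (RuntimeException e) {
            // Log the failure and degrade gracefully rather than crash the chat flow.
            System.err.println("Tool call failed: " + e.getMessage());
            return fallback;
        }
    }

    public static void main(String[] args) {
        // Simulate an MCP server that is down.
        String answer = callWithFallback(
                () -> { throw new RuntimeException("connection refused"); },
                "The data source is currently unavailable.");
        System.out.println(answer);
    }
}
```

The same wrapper shape extends naturally to retries or a circuit breaker around each MCP server connection.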
8. FAQs
Can I connect to MCP servers built with technologies other than Spring?
Yes, MCP is a language-agnostic protocol. Your Spring Boot MCP client can connect to any server that implements the MCP specification, regardless of the technology used to build it.
How do I debug tool calls that aren’t working correctly?
Set the logging level for io.modelcontextprotocol.client to DEBUG to see detailed logs of tool calls. You can also implement additional logging in your MCP servers to track how they process requests.
Can I use MCP with AI models that don’t inherently support tool calling?
No, the AI model must natively support tool calling for MCP to work. Most modern models from providers like Anthropic and Google support this capability.
How many MCP servers can I connect to simultaneously?
There’s no hard limit on the number of MCP servers you can connect to. However, each server consumes resources, so the practical limit depends on your hardware capabilities.
9. Conclusion
In this blog, we’ve walked through creating a beginner-friendly MCP client using Spring Boot AI. By following the steps outlined, you can set up an MCP client that integrates multiple servers, allowing your application to communicate seamlessly with MCP servers. This approach not only simplifies integration with various data sources but also provides flexibility to incorporate additional MCP servers as needed.
10. Learn More
Interested in learning more?
Build MCP Servers with Spring Boot AI: A Beginner’s Guide