Quarkus
AI Conversation

Prompt: java quarkus
Quarkus is a Kubernetes-native Java framework optimized for building cloud-native applications. It is designed for low memory usage, fast startup times, and excellent performance, making it ideal for serverless, microservices, and containerized environments.
Key Features of Quarkus
- Optimized for Containers – Quarkus is designed to work well with Kubernetes and GraalVM, offering minimal memory footprint and quick startup times.
- Supersonic and Subatomic – Achieves fast boot times (~milliseconds) and low memory consumption, crucial for cloud deployments.
- Developer Joy – Live reload, Dev UI, and a simple configuration system make development smooth and efficient.
- Extensive Library Support – Supports Hibernate ORM, RESTEasy (JAX-RS), Panache, SmallRye (MicroProfile), Kafka, and many more extensions.
- GraalVM & Native Image Support – Enables ahead-of-time (AOT) compilation for even better startup performance.
Getting Started with Quarkus
1. Create a New Quarkus Project
Use the Quarkus CLI or Maven to bootstrap a new project.
Using Quarkus CLI:
quarkus create app org.acme:my-quarkus-app
cd my-quarkus-app
Using Maven:
mvn io.quarkus.platform:quarkus-maven-plugin:create \
-DprojectGroupId=org.acme \
-DprojectArtifactId=my-quarkus-app \
-Dextensions="resteasy-reactive"
cd my-quarkus-app
2. Create a Simple REST Endpoint
Modify the generated GreetingResource.java file in src/main/java/org/acme/:
package org.acme;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello from Quarkus!";
    }
}
3. Run the Application
Start the development mode using:
./mvnw quarkus:dev
or
quarkus dev
Now, the application is running at http://localhost:8080/hello.
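The generated project also includes quarkus-junit5 and REST Assured, so you can verify the endpoint automatically. A minimal sketch of such a test (the class name mirrors the one Quarkus generates):

```java
import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.is;

@QuarkusTest
class GreetingResourceTest {

    @Test
    void helloEndpointReturnsGreeting() {
        // Starts the application in test mode and calls the live endpoint.
        given()
            .when().get("/hello")
            .then()
            .statusCode(200)
            .body(is("Hello from Quarkus!"));
    }
}
```

Run it with ./mvnw test.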
4. Build a Native Executable
To compile the application as a native executable:
./mvnw package -Pnative
Run the resulting binary (the file name includes the project version, 1.0.0-SNAPSHOT by default):
./target/my-quarkus-app-1.0.0-SNAPSHOT-runner
Quarkus Use Cases
- Microservices
- Serverless Applications
- Containerized Applications (Docker, Kubernetes)
- High-Performance REST APIs
- Cloud-Native Enterprise Applications
Prompt: java quarkus with artificial intelligence
Using Java Quarkus with Artificial Intelligence (AI) can be a powerful combination for building high-performance, cloud-native applications that leverage AI/ML capabilities. Below is a structured guide on how to integrate AI with Quarkus.
1. Why Quarkus for AI?
Quarkus is a Kubernetes-native Java framework optimized for GraalVM and HotSpot. It enables:
- Fast boot times and low memory usage
- Reactive and imperative programming styles
- Seamless cloud deployment
- Native image support for AI inference services
2. Approaches to AI with Quarkus
You can integrate AI with Quarkus using multiple approaches:
A. Using Java-based AI/ML Libraries
- Deep Java Library (DJL)
  - DJL is an open-source Java library for deep learning.
  - Supports TensorFlow, PyTorch, and MXNet engines.
  - Extensions exist to simplify DJL integration with Quarkus.

Example: Using DJL with Quarkus (a sketch; model loading and inference are omitted)

import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.repository.zoo.Criteria;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class AiService {

    public String predict() {
        // Describe the model to load: a PyTorch model mapping an image to classifications.
        Criteria<Image, Classifications> criteria = Criteria.builder()
                .optEngine("PyTorch")
                .setTypes(Image.class, Classifications.class)
                .build();
        // Load the model from the criteria and run inference here (omitted for brevity).
        return "Prediction Done!";
    }
}

- Tribuo
  - A powerful Java ML library for classification, regression, clustering, and anomaly detection.
  - Can be used directly in Quarkus REST endpoints.
- DeepLearning4J (DL4J)
  - A Java-based deep learning framework built on ND4J, with support for distributed training.
B. Using External AI/ML APIs
If your AI model is deployed outside Quarkus, you can integrate it via REST or gRPC.
Example: Calling an AI Model API
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.QueryParam;
import jakarta.ws.rs.core.MediaType;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

@Path("/model") // base path on the remote service; adjust to your API
@RegisterRestClient
public interface AiModelClient {

    @GET
    @Path("/predict")
    @Produces(MediaType.APPLICATION_JSON)
    PredictionResponse predict(@QueryParam("input") String input);
}
Use this client in your Quarkus service.
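For example, you can inject the generated client into a CDI bean. This sketch assumes the AiModelClient interface above, with its base URL set in application.properties via the quarkus.rest-client configuration:

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.rest.client.inject.RestClient;

@ApplicationScoped
public class PredictionService {

    @Inject
    @RestClient // Quarkus injects the generated AiModelClient implementation
    AiModelClient client;

    public PredictionResponse predictFor(String input) {
        return client.predict(input);
    }
}
```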
C. Running Python AI Models with Quarkus
If your AI model is in Python (TensorFlow, PyTorch, etc.), you can:
- Deploy it using FastAPI or Flask.
- Call it from Quarkus via REST or WebSockets.
- Embed the Python interpreter in the JVM via the JEP (Java Embedded Python) library.
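As a framework-independent sketch of the REST option, calling a Python model server only needs the JDK's built-in HttpClient. The base URL and the /predict endpoint below are assumptions about your Python service:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class PythonModelClient {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    public PythonModelClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    /** Builds the prediction URL for a given input (separate so it is easy to test). */
    public String predictionUrl(String input) {
        return baseUrl + "/predict?input=" + URLEncoder.encode(input, StandardCharsets.UTF_8);
    }

    /** Sends a GET request to the Python service and returns the raw JSON body. */
    public String predict(String input) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(predictionUrl(input)))
                .GET()
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

In a Quarkus application you would usually prefer the REST client shown earlier; this version is useful for quick experiments or plain-Java tools.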
3. Deploying AI Models with Quarkus
A. Running AI Models as Quarkus Microservices
You can package and deploy AI models using Quarkus and containerize them with GraalVM native images for efficiency.
Steps:
- Create a Quarkus REST endpoint serving predictions.
- Integrate the model using Java or call external AI APIs.
- Deploy as a microservice on Kubernetes/OpenShift.
B. Serverless AI Inference with Quarkus
- Use AWS Lambda, Azure Functions, or Google Cloud Run for lightweight AI inference.
- Quarkus optimizes cold start times in serverless AI applications.
4. Example: Full AI-Powered Quarkus REST API
@Path("/ai")
@ApplicationScoped
public class AiResource {
@Inject
AiService aiService;
@GET
@Path("/predict")
@Produces(MediaType.TEXT_PLAIN)
public Response getPrediction() {
String result = aiService.predict();
return Response.ok(result).build();
}
}
- This endpoint can host or call AI models.
5. Conclusion
- Quarkus + AI allows you to run efficient and scalable AI microservices.
- Use DJL, DL4J, or external AI APIs to integrate AI models.
- Deploy AI-powered applications on Kubernetes, AWS Lambda, or as native images.
How does Quarkus enable Reactive?
Quarkus is reactive. It’s even more than this: Quarkus unifies reactive and imperative programming. You don’t even have to choose: you can implement reactive components and imperative components then combine them inside the very same application. No need to use different stacks, tooling or APIs; Quarkus bridges both worlds.
Quarkus has a reactive engine. This engine, powered by Eclipse Vert.x and Netty, handles the non-blocking I/O interactions. Quarkus uses Mutiny as its central reactive programming model.
Vert.x
Quarkus uses Vert.x underneath, and Quarkus applications can access and use the Vert.x APIs directly. This section shows how you can build a Quarkus application using:
- the managed instance of Vert.x
- the Vert.x event bus
- the Vert.x Web Client
The Vert.x reference guide covers more advanced features such as verticles, and native transports.
Example
We are going to build a simple application exposing four HTTP endpoints:
- /vertx/lorem returns the content from a small file
- /vertx/book returns the content from a large file (a book)
- /vertx/hello uses the Vert.x event bus to produce the response
- /vertx/web uses the Vert.x Web Client to retrieve data from Wikipedia
The solution is located in the vertx-quickstart directory.
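As a sketch of the /vertx/hello idea, the managed event bus can be injected and asked for a reply from a consumer. The "greetings" address and the consumer method are illustrative assumptions, not the exact quickstart code:

```java
import io.quarkus.vertx.ConsumeEvent;
import io.smallrye.mutiny.Uni;
import io.vertx.mutiny.core.eventbus.EventBus;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/vertx")
public class VertxHelloResource {

    @Inject
    EventBus bus; // the managed Vert.x event bus

    @GET
    @Path("/hello")
    public Uni<String> hello() {
        // Send "Quarkus" to the (assumed) "greetings" address and wait for the reply.
        return bus.<String>request("greetings", "Quarkus")
                .onItem().transform(message -> message.body());
    }

    @ConsumeEvent("greetings") // consumer replying on the same address
    public String greet(String name) {
        return "Hello " + name + " from the event bus!";
    }
}
```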
Mutiny Integration in Quarkus
The integration of Mutiny in Quarkus goes beyond the library itself. Mutiny exposes hooks that allow Quarkus and Mutiny to be closely integrated:
- Calling await or toIterable fails if you are running on an I/O thread, preventing you from blocking that thread;
- The log() operator uses the Quarkus logger;
- The default Mutiny thread pool is the Quarkus worker thread pool;
- Context Propagation is enabled by default when using Mutiny Uni and Multi.
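For example, a simple Uni pipeline picks up these integrations automatically when it runs inside Quarkus; the class below is just an illustrative wrapper:

```java
import io.smallrye.mutiny.Uni;

public class MutinyExample {

    public static Uni<String> greeting() {
        return Uni.createFrom().item("quarkus")
                .onItem().transform(String::toUpperCase)
                .log(); // inside Quarkus, log() routes through the Quarkus logger
    }
}
```

Calling greeting().await().indefinitely() from a Quarkus I/O thread would fail fast, as described above; on a worker thread it returns the transformed value.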
Further reading
- REST
- Hibernate Panache: quarkus-hibernate-reactive-panache-quickstart
- Reactive SQL Clients: https://quarkus.io/guides/reactive-sql-clients
- Reactive Routes: https://quarkus.io/guides/reactive-routes
- References: https://quarkus.io/