PayFlow: Building an Event-Driven Payment System with Spring Modulith
How I designed and built a modular monolith payment system with HMAC authentication, event-driven architecture, and Spring Modulith, as a junior Java engineer learning by doing.
I am a backend engineer. Java is not my first language. Before this project, most of my backend work was in Node.js. I understood concepts like layered architecture, event-driven design, and clean separation of concerns. But I had never fully applied them in Java at this depth.
PayFlow changed that.
This is a breakdown of what I built, the decisions I made, and what I actually learned. Not a tutorial. Not a "here is the theory" post. Just an honest account of building a payments system from scratch with Spring Boot, Spring Modulith, and PostgreSQL.
What PayFlow does
PayFlow is a payment processing backend. The full flow looks like this:
- A merchant registers and gets a raw API key
- The merchant signs every request with HMAC-SHA256
- A payment is submitted and stored as PENDING
- The fraud module evaluates it and returns a decision
- The payment transitions to AUTHORISED or DECLINED
- An authorised payment triggers a ledger entry
- The merchant receives an email notification
- Every event is persisted in an audit log
The entire thing runs as a single deployable Spring Boot application. One process. One database. But internally, it is structured like separate services: each module owns its schema, its logic, and its data. No module reaches into another module's tables.
That is the core of what a modular monolith is.
Why a Modular Monolith
The honest reason I chose this architecture is that I wanted to understand it before I ever touched a real microservices setup.
Everyone talks about microservices. Fewer people talk about the fact that most companies start with a monolith and that a poorly structured monolith is what leads to the "we need microservices" conversation in the first place.
A modular monolith forces you to draw clear boundaries before you have the operational complexity of distributed systems. You learn to think in terms of bounded contexts, event-driven communication, and module contracts, without juggling multiple deployments, service discovery, or distributed transactions.
The architecture also has a practical appeal. Running a single process is simple. One docker compose up. One JVM. One database connection pool. And if the modules are clean enough, you can extract them into separate services later with minimal pain.
Spring Modulith: Boundaries With Teeth
The most important tool in this project was Spring Modulith.
The idea is simple. You organise your code into top-level packages, one per module. Spring Modulith treats each of those packages as a bounded context. It then gives you the ability to verify, at test time, that no module is violating another module's boundaries.
Here is what the package structure looks like:
```
com.example.payflow
├── payments/
├── fraud/
├── ledger/
├── notifications/
├── merchant/
├── audit/
└── shared/
```
Each of these is a module. The shared package is the only one they are allowed to import from. If fraud tried to directly call into payments.service.PaymentService, Spring Modulith would catch it.
The verification happens through a simple test:
```java
@Test
void verifyModularStructure() {
    ApplicationModules.of(PayFlowApplication.class).verify();
}
```
That one line runs at build time and tells you exactly which module boundary you broke and where. It is like having a linter for your architecture.
ApplicationModuleListener
The other thing Spring Modulith gave me is @ApplicationModuleListener.
Plain @EventListener in Spring works synchronously inside the same transaction. If the listener throws, it rolls back your original transaction. For a payment system, that is a problem. You do not want the fraud assessment failing to roll back the payment that was already saved.
@ApplicationModuleListener changes that. It wraps the listener in its own transaction, handles retries on failure, and integrates with Spring Modulith's EventPublicationRegistry to track delivery. If a listener fails, the framework knows it needs to retry. If the app crashes mid-processing, the event publication state is persisted and can be recovered.
Here is what it looks like in the fraud module:
```java
@ApplicationModuleListener
public void on(PaymentTransactionInitiated event) {
    var result = ruleEngine.evaluate(event.getAmount());
    var assessment = FraudAssessment.create(
            event.getPaymentId(), event.getMerchantId(), result.score(), result.decision());
    repository.save(assessment);

    publisher.publish(new FraudAssessmentCompleted(
            event.getCorrelationId(),
            assessment.getTransactionId(),
            assessment.getMerchantId(),
            assessment.getScore(),
            assessment.getDecision().name()
    ));
}
```
The fraud module does not know about payments. It listens to an event from shared, does its work, and publishes back to shared. The payments module listens to that result and updates the payment status. Nobody is calling anybody directly.
The Event System
Every event in the system extends DomainEvent:
```java
public abstract class DomainEvent {

    private final String eventId = UUID.randomUUID().toString();
    private final String correlationId;
    private final Instant occurredAt = Instant.now();

    protected DomainEvent(String correlationId) {
        this.correlationId = correlationId;
    }
}
```
Every event gets a unique eventId, carries a correlationId that traces the entire payment flow from start to finish, and timestamps itself at creation. When you look at the audit log for a single payment, you see every event that ever touched it in the order they happened, all sharing the same correlationId.
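To make that concrete, here is a hedged sketch of one event subclass. The base class is repeated (with getters added) so the snippet stands alone, and the subclass fields are illustrative rather than PayFlow's exact ones:

```java
import java.math.BigDecimal;
import java.time.Instant;
import java.util.UUID;

// Base class repeated from above so this compiles on its own; getters added.
abstract class DomainEvent {
    private final String eventId = UUID.randomUUID().toString();
    private final String correlationId;
    private final Instant occurredAt = Instant.now();

    protected DomainEvent(String correlationId) {
        this.correlationId = correlationId;
    }

    public String getEventId() { return eventId; }
    public String getCorrelationId() { return correlationId; }
    public Instant getOccurredAt() { return occurredAt; }
}

// Illustrative subclass: the real PaymentTransactionInitiated carries more data.
class PaymentTransactionInitiated extends DomainEvent {
    private final UUID paymentId;
    private final BigDecimal amount;

    PaymentTransactionInitiated(String correlationId, UUID paymentId, BigDecimal amount) {
        super(correlationId);
        this.paymentId = paymentId;
        this.amount = amount;
    }

    public UUID getPaymentId() { return paymentId; }
    public BigDecimal getAmount() { return amount; }
}
```

Two events created in the same flow share a correlationId but get distinct eventIds, which is exactly what makes the audit log traceable per payment.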
The events I built:
| Event | Published By | Consumed By |
|---|---|---|
| PaymentTransactionInitiated | Payments | Fraud, Audit |
| FraudAssessmentCompleted | Fraud | Payments, Ledger, Audit |
| PaymentTransactionAuthorized | Payments | Ledger, Notifications, Audit |
| PaymentTransactionDeclined | Payments | Notifications, Audit |
| LedgerEntryPosted | Ledger | Audit |
The audit module listens to every event and records it. That gives you a full event history per payment without any module needing to know the audit module exists.
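The mechanism can be modelled in a few lines. This is a toy in-memory stand-in for the audit module (the real one persists to the audit schema), just to show the grouping-by-correlationId idea:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy sketch: one listener records every event type against its correlationId,
// so the full history of a payment can be read back in arrival order.
class AuditTrail {
    private final Map<String, List<String>> byCorrelation = new LinkedHashMap<>();

    void record(String correlationId, String eventType) {
        byCorrelation.computeIfAbsent(correlationId, k -> new ArrayList<>()).add(eventType);
    }

    List<String> historyFor(String correlationId) {
        return byCorrelation.getOrDefault(correlationId, List.of());
    }
}
```

The point of the design is that publishers never call this; the audit side subscribes to everything and the rest of the system stays unaware of it.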
The Payment Domain Model
One of the things I was most deliberate about was keeping the domain model honest.
Payment is not an anemic data class. It has state and it protects that state:
```java
@Entity
@Table(schema = "payments", name = "payment")
@NoArgsConstructor(access = AccessLevel.PROTECTED)
public class Payment {

    @Id
    @GeneratedValue(strategy = GenerationType.UUID)
    private UUID id;

    @Version
    private Long version;

    @Enumerated(EnumType.STRING)
    private PaymentStatus status;

    private Instant updatedAt;

    public static Payment create(UUID correlationId, String idempotencyKey, ...) {
        var payment = new Payment();
        payment.status = PaymentStatus.PENDING;
        // ...
        return payment;
    }

    public void authorise() {
        this.status = PaymentStatus.AUTHORISED;
        this.updatedAt = Instant.now();
    }

    public void decline() {
        this.status = PaymentStatus.DECLINED;
        this.updatedAt = Instant.now();
    }
}
```
There is no public constructor, so you cannot build a Payment in an arbitrary state. You cannot set the status directly either. You go through authorise() or decline(), which are the only valid transitions. This is a rich domain model and it makes the business rules impossible to bypass.
The @Version field handles optimistic locking. If two processes try to update the same payment at the same time, only one wins. The other gets an OptimisticLockingFailureException. This matters in payment systems where duplicate processing is a real risk.
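The version check can be illustrated without JPA. This toy class is not PayFlow code; it just shows the rule that @Version enforces: an update only succeeds if the version it was read at is still current (JPA does this with an UPDATE ... WHERE version = ? statement and throws when no row matches):

```java
// Toy illustration of optimistic locking. A writer carries the version it read;
// if another writer got there first, the version no longer matches and the
// update is rejected instead of silently overwriting.
class VersionedRecord {
    private long version = 0;
    private String status = "PENDING";

    synchronized boolean update(long expectedVersion, String newStatus) {
        if (version != expectedVersion) {
            return false; // stale read: the JPA equivalent is an OptimisticLockingFailureException
        }
        status = newStatus;
        version++;
        return true;
    }

    synchronized long currentVersion() { return version; }
    synchronized String status() { return status; }
}
```

The caller that loses the race has to re-read the current state and decide whether its update still makes sense, which is exactly the behaviour you want for duplicate payment processing.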
Idempotency
Idempotency was one of the first things I thought about when designing the payments module.
A client submitting a payment might lose the network connection after sending the request. They will retry. If your system is not idempotent, you create two payments for one intent.
The fix is simple but needs to be done correctly. Clients send an X-Idempotency-Key header with every submission. Before creating a new payment, the service checks if that key already exists:
```java
public PaymentResponse submit(UUID merchantId, SubmitPaymentRequest request) {
    var existing = paymentRepository.findByIdempotencyKey(request.idempotencyKey());
    if (existing.isPresent()) {
        return toResponse(existing.get());
    }
    // create the payment
}
```
The idempotency key has a unique constraint at the database level too. So even if two requests arrive simultaneously, the second one will fail at the DB constraint and the first one wins. You always return the same response for the same key.
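The guarantee can be modelled with putIfAbsent, which plays the role of the database unique constraint here. A toy sketch, not the real service:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Toy model of idempotent submission: putIfAbsent is atomic, so even if two
// requests race, only the first writer's payment is kept and every retry with
// the same key gets the original back.
class IdempotentSubmitter {
    private final ConcurrentMap<String, String> paymentsByKey = new ConcurrentHashMap<>();

    String submit(String idempotencyKey, String paymentId) {
        String existing = paymentsByKey.putIfAbsent(idempotencyKey, paymentId);
        return existing != null ? existing : paymentId;
    }
}
```

In the real system the atomic "first writer wins" step is the unique constraint on idempotency_key; catching the constraint violation and re-reading the existing row gives the same result as the map lookup above.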
Security: HMAC Request Signing
I wanted merchants to authenticate with something more interesting than a plain API key in a header. HMAC request signing was the answer.
The concept: the merchant has a secret key. For each request, they compute a signature over the timestamp and request body using HMAC-SHA256. The server independently computes the same signature and compares them. If they match, the request is authentic and untampered.
```java
public boolean verify(String secret, String timestamp, String body, String signature)
        throws GeneralSecurityException {
    var expected = computeSignature(secret, timestamp, body);
    return MessageDigest.isEqual(
            expected.getBytes(StandardCharsets.UTF_8),
            signature.getBytes(StandardCharsets.UTF_8)
    );
}

// Mac.getInstance and mac.init throw checked exceptions, hence the declaration.
private String computeSignature(String secret, String timestamp, String body)
        throws GeneralSecurityException {
    var mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
    var payload = timestamp + "." + body;
    return HexFormat.of().formatHex(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
}
```
The filter also checks that the timestamp is not more than 5 minutes old. This blocks replay attacks, where someone intercepts a valid signed request and resends it later.
MessageDigest.isEqual does constant-time comparison, which prevents timing attacks where an attacker can guess characters of the expected signature by measuring how long the comparison takes.
To generate the signature on the client side:
```shell
TIMESTAMP=$(date +%s)
BODY='{"payeeAccountId":"...","amount":"150.00",...}'
SIGNATURE=$(printf '%s.%s' "$TIMESTAMP" "$BODY" | openssl dgst -sha256 -hmac "$API_KEY" -hex | sed 's/^.* //')
```
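The server-side freshness check reduces to comparing epoch seconds. A sketch, assuming the X-Timestamp header carries Unix epoch seconds as in the date +%s client example (the helper name is mine, not PayFlow's):

```java
import java.time.Duration;
import java.time.Instant;

// Replay-window check: reject any request whose timestamp drifts more than
// the allowed window from server time, in either direction (clock skew).
final class ReplayWindow {
    static boolean isFresh(String timestampHeader, Duration maxSkew, Instant now) {
        try {
            long ts = Long.parseLong(timestampHeader);
            return Math.abs(now.getEpochSecond() - ts) <= maxSkew.toSeconds();
        } catch (NumberFormatException e) {
            return false; // malformed header: reject outright
        }
    }
}
```

Passing the clock in as a parameter keeps the check trivially testable, which matters when the rejection path guards real money.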
API Key Management
The merchant module handles the full lifecycle of API keys.
When a merchant registers, a 32-byte cryptographically random key is generated with SecureRandom, encrypted with AES and stored. The raw key is returned once to the merchant and never stored in plaintext.
Keys expire after 90 days. There is a rotation endpoint for when merchants need a new key. A scheduled job runs every night at 2am and purges all expired or revoked keys from the database.
```java
@Scheduled(cron = "0 0 2 * * *")
@Transactional
public void purgeExpiredAndRevokedKeys() {
    apiKeyRepository.deleteByActiveFalseOrExpiresAtBefore(Instant.now());
}
```
Every authentication call records lastUsedAt on the key. This gives you visibility into which keys are actively being used.
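Key generation itself is a few lines of JDK code. A sketch of the approach described above; the URL-safe Base64 encoding is my assumption, not necessarily PayFlow's exact output format:

```java
import java.security.SecureRandom;
import java.util.Base64;

// 32 cryptographically random bytes from SecureRandom, encoded so the merchant
// can paste the key straight into a header. This raw value is shown once;
// only an encrypted form is stored server-side.
final class ApiKeyGenerator {
    private static final SecureRandom RANDOM = new SecureRandom();

    static String generateRawKey() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```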
Database Design
Each module has its own PostgreSQL schema: payments, fraud, ledger, merchant, audit, each isolated. No joins across schemas at the database level. Cross-module data aggregation happens in the application layer through events.
The migrations are managed with Flyway and placed under db/migration/postgresql/. Every migration is versioned and written with proper constraints:
```sql
CREATE TABLE IF NOT EXISTS payments.payment
(
    id              UUID           NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
    correlation_id  UUID           NOT NULL,
    idempotency_key VARCHAR(128)   NOT NULL,
    amount          DECIMAL(18, 4) NOT NULL,
    status          VARCHAR(20)    NOT NULL,
    version         BIGINT         NOT NULL DEFAULT 0,
    created_at      TIMESTAMPTZ    NOT NULL,
    updated_at      TIMESTAMPTZ    NOT NULL,
    CONSTRAINT uq_payment_idempotency_key UNIQUE (idempotency_key),
    CONSTRAINT chk_payment_status CHECK (status IN ('PENDING', 'AUTHORISED', 'DECLINED')),
    CONSTRAINT chk_payment_amount CHECK (amount > 0)
);
```
TIMESTAMPTZ instead of TIMESTAMP means timestamps are always stored in UTC. The CHECK constraints enforce valid states at the database level, not just the application level. The unique constraint on idempotency_key is the last line of defence against duplicate payments.
Testing
The test suite runs 33 tests across every module. Zero failures.
Unit tests use JUnit 5 with Mockito and AssertJ. Controller tests use @SpringBootTest with MockMvc. The test configuration uses H2 in PostgreSQL compatibility mode so the in-memory database behaves close to production.
Here is a representative test showing how I verified the event chain in the fraud module:
```java
@Test
void on_persistsApproveDecisionAndPublishesCompletedEvent() {
    when(repository.save(any(FraudAssessment.class))).thenAnswer(i -> i.getArgument(0));

    var event = initiatedEvent(new BigDecimal("150.00"));
    listener.on(event);

    var assessmentCaptor = ArgumentCaptor.forClass(FraudAssessment.class);
    verify(repository).save(assessmentCaptor.capture());
    assertThat(assessmentCaptor.getValue().getDecision()).isEqualTo(FraudDecision.APPROVE);

    var eventCaptor = ArgumentCaptor.forClass(FraudAssessmentCompleted.class);
    verify(publisher).publish(eventCaptor.capture());
    assertThat(eventCaptor.getValue().getDecision()).isEqualTo("APPROVE");
}
```
The test verifies two things at once: the assessment was persisted with the right decision, and the correct event was published with the right data. ArgumentCaptors let you inspect exactly what was passed to your dependencies without needing the real implementations.
What Actually Runs
The full running system:
```shell
docker compose up -d    # starts PostgreSQL
./mvnw spring-boot:run  # starts the app
```
Flyway runs the migrations automatically on startup. Virtual threads are enabled with spring.threads.virtual.enabled=true: one line, and the JVM uses Project Loom under the hood.
A real payment flow through the system end to end:
```shell
# Register
curl -X POST http://localhost:8080/api/v1/merchants/register \
  -H "Content-Type: application/json" \
  -d '{"name": "Acme Corp", "email": "merchant@example.com"}'

# Sign and submit payment
TIMESTAMP=$(date +%s)
BODY='{"payeeAccountId":"...","idempotencyKey":"key-001","amount":"250.00","currency":"USD","paymentMethod":{"type":"CARD","token":"tok_test"}}'
SIGNATURE=$(printf '%s.%s' "$TIMESTAMP" "$BODY" | openssl dgst -sha256 -hmac "$API_KEY" -hex | sed 's/^.* //')

curl -X POST http://localhost:8080/api/v1/payments \
  -H "Content-Type: application/json" \
  -H "X-Merchant-ID: $MERCHANT_ID" \
  -H "X-Timestamp: $TIMESTAMP" \
  -H "X-Signature: $SIGNATURE" \
  -d "$BODY"

# Check the audit trail (recompute TIMESTAMP and SIGNATURE first: the signature
# covers the request body, and this GET has a different one)
curl http://localhost:8080/api/v1/payments/{id}/events \
  -H "X-Merchant-ID: $MERCHANT_ID" \
  -H "X-Timestamp: $TIMESTAMP" \
  -H "X-Signature: $SIGNATURE"
```
The audit trail response shows every event in the order they occurred:
```json
{
  "data": [
    { "eventType": "Payment.Transaction.Initiated", "occurredAt": "..." },
    { "eventType": "Fraud.Assessment.Completed", "occurredAt": "..." },
    { "eventType": "Payment.Transaction.Authorised", "occurredAt": "..." },
    { "eventType": "Ledger.Entry.Posted", "occurredAt": "..." }
  ]
}
```
What I Would Fix
A few things I know need improvement and plan to address:
AES encryption mode. The API key encryption currently uses AES/ECB, which is cryptographically weak. ECB does not use an IV, which means identical inputs produce identical outputs. The fix is AES/GCM/NoPadding with a random IV stored alongside the ciphertext.
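For reference, the planned shape of that fix, sketched with plain JDK crypto. This is not PayFlow's current code, just a minimal AES/GCM roundtrip with the random IV prepended to the ciphertext:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.ByteBuffer;
import java.security.SecureRandom;

// AES/GCM with a fresh 12-byte IV per encryption. Because the IV is random,
// encrypting the same plaintext twice yields different ciphertexts, which is
// exactly the property ECB lacks. The IV is stored by prepending it.
final class AesGcm {
    private static final int IV_BYTES = 12;
    private static final int TAG_BITS = 128;
    private static final SecureRandom RANDOM = new SecureRandom();

    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        return ByteBuffer.allocate(iv.length + ciphertext.length).put(iv).put(ciphertext).array();
    }

    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        ByteBuffer buf = ByteBuffer.wrap(blob);
        byte[] iv = new byte[IV_BYTES];
        buf.get(iv);
        byte[] ciphertext = new byte[buf.remaining()];
        buf.get(ciphertext);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        return cipher.doFinal(ciphertext); // also verifies the GCM auth tag
    }
}
```

GCM additionally authenticates the ciphertext, so a tampered stored key fails decryption instead of silently producing garbage.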
Committed credentials. The application.properties file has hardcoded development fallback values including a database password and a Gmail app password. Those need to go. The fallback defaults should be empty, and the app should fail fast on startup if required secrets are not provided.
Module boundary violations. The payments module imports directly from audit.service, and notifications imports from merchant.service. Both of those are coupling that should not exist. The merchant email should travel as part of the event payload. The audit trail should be built entirely from events without the payments module needing to call into it.
Fraud rule engine. One threshold check is not a rule engine. I want to add multiple configurable rules, a scoring system, and the ability to add new rules without touching the core evaluation logic.
These are not excuses. They are the next things I am building.
What Building This Taught Me
Before this project, I understood event-driven architecture conceptually. I had read about it. After this project, I understand it practically. I know what it feels like to design a system where modules are genuinely decoupled, where adding a new consumer means writing a new listener and not touching any existing code.
I understand why @ApplicationModuleListener exists and why plain @EventListener is not enough for a system where reliability matters. I understand what a domain model with real behaviour looks like versus a data class with getters and setters.
I understand why idempotency is not optional in payment systems. I understand what a Flyway migration strategy looks like and why you want TIMESTAMPTZ over TIMESTAMP. I understand what it means to think about module boundaries before you write the code, not after you have ten thousand lines of entangled logic.
Java is not my first language. But this project has made me significantly more confident with it. The next thing I build in Java will be better because of what I figured out while building this.
The code is on GitHub if you want to look at it. There are issues in it; I listed them above. But it runs, it is tested, and it does what it says it does.
That is where I am right now. Building the next version.