Saga Orchestration – A Copy-and-Paste Implementation

Requirements:

  • BPMN activities:
    • service task activity
    • fork and split gateway for a parallel execution of activities
    • compensation activity
  • compensation logics
    • automatic trigger of compensation on exception
    • reverse order on compensation
  • store, read, and pass variables
  • save point after each activity for fault tolerance (e.g., DB or file system)
  • visualization of transactions for debugging and manual compensation purposes

Scenario:

  1. create pending order
  2. reduce inventory
  3. fulfil payment
  4. publish order
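The compensation requirements (automatic trigger on exception, reverse order) can be sketched in plain Java. The SagaStep record, the step names, and the failing payment step are hypothetical stand-ins for the scenario above, not part of the saga framework:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class CompensationSketch {

	// hypothetical step with a forward action and a compensation action
	record SagaStep(String name, Runnable action, Runnable compensation) {}

	public static final List<String> log = new ArrayList<>();

	static void run(List<SagaStep> steps) {
		Deque<SagaStep> executed = new ArrayDeque<>();
		try {
			for (SagaStep step : steps) {
				step.action().run();
				executed.push(step); // remember for potential compensation
			}
		} catch (Exception e) {
			// automatic trigger: compensate all completed steps in reverse order
			while (!executed.isEmpty()) {
				executed.pop().compensation().run();
			}
		}
	}

	public static void main(String[] args) {
		run(List.of(
			new SagaStep("create pending order", () -> log.add("create"), () -> log.add("cancel order")),
			new SagaStep("reduce inventory", () -> log.add("reduce"), () -> log.add("restock")),
			new SagaStep("fulfil payment", () -> { throw new RuntimeException("payment failed"); }, () -> log.add("refund"))
		));
		System.out.println(log); // [create, reduce, restock, cancel order]
	}
}
```

Note that the failing step itself is not compensated; only the steps that completed before it are rolled back, last one first.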

Start a saga instance:

// start saga instance from saga model "order creation"
orderCreationSaga = sagaFactory.startNewInstanceFromSagaModel("order creation")
orderId = orderCreationSaga.getVariables().getOrWait("orderId", 500, Millis)
orderCreationSaga.waitForCompletion(3_000, Millis)

The task to execute a single saga step:

public class SagaStepRunnable implements Runnable {

	private final SagaInstance sagaInstance;
	private final SagaActivity sagaActivity;

	public SagaStepRunnable(SagaInstance sagaInstance, SagaActivity sagaActivity) {
		this.sagaInstance = sagaInstance;
		this.sagaActivity = sagaActivity;
	}

	@Override
	public void run() {
		SagaLog sagaLog = sagaInstance.getSagaLog();

		try {
			sagaActivity.execute(sagaInstance.getVariables());
			// if the thread crashes at this point, the saga restarts after the last successful activity.
			// hence, all activities (local and remote) must be idempotent.
			sagaLog.storeSuccessEventFor(sagaActivity, sagaInstance.getVariables());
		} catch (Exception e) {
			sagaLog.storeFailureEventFor(sagaActivity, sagaInstance.getVariables(), e);
			sagaInstance.switchToCompensationMode();
		}

		sagaInstance.scheduleNextActivities();
	}
}

Questions:

  • necessary to sync “entity creation for order” with “current variables” and “success event”?
  • remote AND local services must be idempotent?

Get Control of I/O and Time in Your Unit Tests!

In your production environment, your applications usually have no control of I/O devices and the system clock. However, in tests you want to have control of these external dependencies. Dave Farley motivates this wish for control very well in the video linked below. Although he talks about acceptance tests, the statement holds equally for unit tests.

In a nutshell: access to I/O devices reduces the performance of your tests and the reliability of your test results. If your application uses the system clock to schedule tasks at particular moments in time, performance suffers even more because your tests have to wait for the tasks to be executed.

But how can we get rid of these disadvantages? Our goal is to access the I/O devices and the system clock in an ordinary fashion if our application runs in production, and in a controlled fashion if it runs somewhere else.

The approach

A simple but effective approach is to introduce an abstract interface for each I/O device and for the system clock in your application. By default, your application uses the ordinary implementations of these interfaces. However, if you run its unit tests, it uses in-memory implementations instead. In this way, unit tests can run as fast as possible, without any (external) dependencies, and independently of each other – that is, they can run in parallel.

In the following sections, we have a look at common use cases.

Get control of the file system!

First, introduce an interface that represents the central access point in your application to all operations on the file system:

public interface Filesystem {

   void writeTextToFile(String text, Path path, Charset charset) throws IOException;
   List<String> readLinesFromFile(Path path, Charset charset) throws IOException;
   // Add further access methods here. Ideally, declare only the methods your application really uses, and no more.

}

Second, pass the file system interface to each class in your application that wants to access the file system:

public class ArticleWriter {

   private final Filesystem filesystem;

   public ArticleWriter(Filesystem filesystem) {
      this.filesystem = filesystem;
   }

   public void write(Article article, Path path) throws IOException {
      this.filesystem.writeTextToFile(article.toString(), path, StandardCharsets.UTF_8);
   }
}
public class PersonNamesReader {

   private final Filesystem filesystem;

   public PersonNamesReader(Filesystem filesystem) {
      this.filesystem = filesystem;
   }

   public List<String> read(Path path) throws IOException {
      return this.filesystem.readLinesFromFile(path, StandardCharsets.UTF_8);
   }
}

Third, adapt the available unit tests from these classes:

class ArticleWriterTest {
   
   private Filesystem filesystem;
   private ArticleWriter articleWriter;

   @BeforeEach
   void init() {
      filesystem = new UnittestFilesystem();
      articleWriter = new ArticleWriter(filesystem);
   }

   @Test
   void withValidArticleAndValidFilePathShouldWriteToFileWithoutException() throws IOException {
      // given
      Article article = new Article().title("Get Control!").text("lorem ipsum");
      String filepath = "./articles/";

      // when
      ThrowingCallable writeArticleCallable = () -> articleWriter.write(article, Paths.get(filepath));

      // then
      assertThatCode(writeArticleCallable).doesNotThrowAnyException();
   }
}
class PersonNameReaderTest {
   
   private Filesystem filesystem;
   private PersonNamesReader personNamesReader;

   @BeforeEach
   void init() { 
      filesystem = new UnittestFilesystem();
      personNamesReader = new PersonNamesReader(filesystem);
   }

   @Test
   void withValidFilePathAndTwoPersonNamesShouldReadTwoPersonNames() throws IOException {
      // given
      String filepath = "./articles/";
      List<String> personNamesInFile = Arrays.asList("Dave", "Christian");
      filesystem.writeTextToFile(String.join("\n", personNamesInFile), Paths.get(filepath), StandardCharsets.UTF_8);

      // when
      List<String> personNames = personNamesReader.read(Paths.get(filepath));

      // then
      assertThat(personNames).containsExactlyElementsOf(personNamesInFile);
   }
}

Fourth, provide the ordinary and the in-memory implementation for the file system interface:

public class JavaNioFilesystem implements Filesystem {

   public void writeTextToFile(String text, Path path, Charset charset) throws IOException {
      Files.write(path, text.getBytes(charset));
   }

   public List<String> readLinesFromFile(Path path, Charset charset) throws IOException {
      return Files.readAllLines(path, charset);
   }
}
public class UnittestFilesystem implements Filesystem {

   private final Map<Path, byte[]> files = new HashMap<>();

   public void writeTextToFile(String text, Path path, Charset charset) throws IOException {
      files.put(path, text.getBytes(charset));
   }

   public List<String> readLinesFromFile(Path path, Charset charset) throws IOException {
      byte[] fileBytes = files.get(path);
      String text = new String(fileBytes, charset);
      String[] lines = text.split("\\n");
      return Arrays.asList(lines);
   }
}  

Fifth, configure the default for production (here implemented with Spring):

@Configuration
class BeanDeclarations {

   @Bean
   Filesystem javaNioFilesystem() {
      return new JavaNioFilesystem();
   }

   @Bean
   ArticleWriter articleWriter(Filesystem filesystem) {
      return new ArticleWriter(filesystem);
   }

   @Bean
   PersonNamesReader personNamesReader(Filesystem filesystem) {
      return new PersonNamesReader(filesystem);
   }
}

The UnittestFilesystem is created by hand without injection for each test.

Get control of the socket communication!

First, introduce an interface that represents the central access point in your application to all operations on the system’s sockets:

public interface Socketsystem {

   void writeBytesToAddress(byte[] bytes, String ipAddress, int port) throws IOException;
   // Add further access methods here. Ideally, declare only the methods your application really uses, and no more.

}

Second, third, fourth and fifth are analogous to the file system from above.
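For illustration, the in-memory implementation (the fourth step) could simply record all written bytes per target address. The class UnittestSocketsystem and its test helper getWrittenBytesFor are assumptions following the naming scheme above; the Socketsystem interface is repeated so the sketch is self-contained:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

interface Socketsystem {
   void writeBytesToAddress(byte[] bytes, String ipAddress, int port) throws IOException;
}

public class UnittestSocketsystem implements Socketsystem {

   // records every write per "ip:port" target so unit tests can assert on it
   private final Map<String, List<byte[]>> writtenBytes = new HashMap<>();

   @Override
   public void writeBytesToAddress(byte[] bytes, String ipAddress, int port) throws IOException {
      writtenBytes.computeIfAbsent(ipAddress + ":" + port, key -> new ArrayList<>()).add(bytes);
   }

   /** test helper: all writes to the given address, in order */
   public List<byte[]> getWrittenBytesFor(String ipAddress, int port) {
      return writtenBytes.getOrDefault(ipAddress + ":" + port, List.of());
   }
}
```

A test can then exercise the class under test and afterwards assert on getWrittenBytesFor(..), without opening a real socket.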

Get control of the system’s time!

If your application does something that happens at a scheduled point in time, your unit test should NEVER wait for it. Instead, it should set this point in time such that the scheduled task is executed immediately. The same applies to methods that return values depending on the current point in time, for example, getAge() or isOverdue().

First, introduce an interface that represents the central access point in your application to the system’s time:

public interface SystemTime {

   DateTime getCurrentSystemTime();
   /** @return the new system's time after traveling (forth or back) the given amount of time. */
   DateTime travel(int time, TimeUnit timeUnit);
   // Add further access methods here. Ideally, declare only the methods your application really uses, and no more.

}

Second, pass the system’s time interface to each class in your application that wants to access the system’s time. Unfortunately, you often need to invest more effort than that when using a scheduler library, because such a library usually uses the operating system’s time directly, without any possibility to manipulate the scheduler’s point in time.

Third, fourth and fifth are analogous to the file system from above.
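As a sketch of the in-memory implementation, here with java.time.Instant and ChronoUnit standing in for the DateTime and TimeUnit types of the interface above (an assumption for this sketch):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

// in-memory clock for unit tests; time only moves when a test calls travel(..)
public class UnittestSystemTime {

   private Instant currentTime;

   public UnittestSystemTime(Instant startTime) {
      this.currentTime = startTime;
   }

   public Instant getCurrentSystemTime() {
      return currentTime;
   }

   /** travels forth (positive amount) or back (negative amount) and returns the new time */
   public Instant travel(long amount, ChronoUnit unit) {
      currentTime = currentTime.plus(amount, unit);
      return currentTime;
   }
}
```

A test can instantiate the clock at a fixed start time, travel past a deadline, and immediately assert that, for example, isOverdue() now returns true, without sleeping.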

Analysis of the additional effort

In a nutshell: the additional effort is minimal. Thus, I recommend always using this approach for any I/O access (including the system’s clock).

In more detail: for each I/O device, you have to introduce an additional interface with its two implementations: the ordinary one and the in-memory alternative. Once declared, you can access the I/O device in a similar way as you would access it directly.

How to enforce this approach?

If you would like to enforce this approach in a particular application (or across the whole landscape of your company’s applications), formulate a corresponding rule and have it checked automatically and continuously.

One way to realize this is to allow access to the file system only in a single class named “Filesystem”. For all other classes in the application, access to Java’s file API is prohibited. Rules for accessing the socket API and the system’s clock are analogous. Thus, the access is limited to a single class with a hard-coded class name.

To enable a step-by-step migration of a large legacy code base, these rules could be restricted to new source code. Alternatively, the number of rule violations in the current commit may never exceed the number determined in the previous commit.

Transactional Outbox Implementations

The transactional outbox pattern allows an application to update the state of a database and to send a message to a message broker in one single transaction. In this way, either both or none of these actions happen. In other words: they are executed atomically. Details of this pattern can be found here: https://microservices.io/patterns/data/transactional-outbox.html.

There are numerous approaches to implement this pattern. In the following, we discuss those approaches that are most reasonable to me.

Polling Publisher

Using a Scheduled Database Poller

Polling the database at a regular time interval is the straightforward implementation of the “Polling Publisher”.

Scheduled Database Poller with Java and Spring:

@Scheduled(fixedDelay = 1000) // e.g., poll every second
void pollDatabase() {
   Optional<OutboxEvent> event = this.repository.findFirstByOrderByTimestampAsc();
   event.ifPresent(this.eventService::send);
}

EventService with Java and Spring:

@Transactional
@Retryable(..)
public void send(OutboxEvent event) {
   this.kafkaTemplate.send(..);

   this.repository.deleteById(event.getId());
}

Attention must be paid to multi-instance operation. If multiple instances share the same database, they also share the same outbox table. As a consequence, they interfere with each other when processing outbox entries.

Advantages: This approach is easy to implement and requires no special infrastructure.

Disadvantages: The scheduler performs polling and thus stresses the database unnecessarily. Moreover, running multiple instances requires a more complex implementation to keep the order of events.

Requirements: Obviously, this approach requires a database table to store the outbox events. Moreover, it requires a poller – either executed by a thread or by a dedicated application.

What about using an in-memory queue as optimization to avoid polling?

Unfortunately, this approach either loses events or falls back to polling, depending on whether the notification is passed after or within the transaction.

Using a Workflow Engine

Bernd Ruecker proposes to use a workflow engine in order to implement the transactional outbox pattern. The corresponding process model is illustrated in the following figure.

The underlying workflow engine remembers the position in the process model, so that it does not re-execute tasks that have already been executed completely. In this way, the workflow engine resumes with the task that has not been finished and that should be processed next according to the process model.

Hence, if the application crashes before completing the database operation task, the process instance resumes at this point and re-executes the database operation task. Likewise, if the application crashes before completing the notification task, the process instance resumes at this point and re-executes the notification task.

Strictly speaking, this approach does not represent the pattern anymore, because it does not make use of an outbox table. Nonetheless, it puts the pattern’s underlying idea into practice.

Advantages: This approach requires neither a dedicated outbox table, nor special infrastructure or a custom scheduler. Moreover, we can make use of the monitoring and operations capabilities of the workflow engine tooling to debug pending tasks.

Disadvantages: The integrated job scheduler of the workflow engine still performs polling. Moreover, this approach does not keep the order of events.

Requirements: Obviously, this approach requires a workflow engine. If the engine and the process model should be hidden from the application developer, a transactional outbox API is recommended that encapsulates these implementation details.

Why does this approach work?

The alternative approaches remember the intent to send an event by storing an entry in the outbox table. In contrast, this approach remembers the intent by storing the current position and the event’s data in the process instance. Of course, this data is also represented by database tables. However, these tables are managed automatically by the workflow engine, and not by the application developer.

Transaction Log Tailing

TODO no polling

Best-Practices for Transactions with Spring

Spring’s @Transactional is a great way to reduce the boilerplate code usually necessary to begin and end a transaction:

Without @Transactional in pseudo-code:

public void saveOrder(Order order) {
   transaction = transactionManager.beginTransaction();
   try {
      // here: save the order
      transaction.commit();
   } catch (Exception e) {
      transaction.rollback();
      throw e;
   }
}

With @Transactional:

@Transactional
public void saveOrder(Order order) {
   // here: save the order
}

Both methods execute the same code. However, the second implementation is much shorter.

How does it work? Let’s quote a short description:

At a high level, Spring creates proxies for all the classes annotated with @Transactional, either on the class or on any of the methods. The proxy allows the framework to inject transactional logic before and after the running method, mainly for starting and committing the transaction.

https://www.baeldung.com/transaction-configuration-with-jpa-and-spring#1-transactions-and-proxies

The following figure illustrates this approach:

When invoking saveOrder(..) from within some external method, saveOrder(..) of the proxy class is called. The proxy starts the transaction and then delegates to the original saveOrder(..). After returning, the proxy commits or rolls back the transaction.

This approach has some important implications. Let’s quote again:

What’s important to keep in mind is that, if the transactional bean is implementing an interface, by default the proxy will be a Java Dynamic Proxy. This means that only external method calls that come in through the proxy will be intercepted. Any self-invocation calls will not start any transaction, even if the method has the @Transactional annotation.

Another caveat of using proxies is that only public methods should be annotated with @Transactional. Methods of any other visibilities will simply ignore the annotation silently as these are not proxied.

https://www.baeldung.com/transaction-configuration-with-jpa-and-spring#1-transactions-and-proxies

With regard to the figure above, the call from within updateOrder(..) to saveOrder(..) will not start any transaction, since it is invoked from within the same class. Furthermore, a call to deleteOrder(..) will not start a transaction either, since deleteOrder(..) is not public.
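The self-invocation pitfall can be reproduced without Spring by using a plain JDK dynamic proxy as a minimal stand-in for Spring’s transactional proxy. The OrderService interface and the transactionLog are illustrative only:

```java
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class SelfInvocationDemo {

   public interface OrderService {
      void updateOrder();
      void saveOrder();
   }

   public static class OrderServiceImpl implements OrderService {
      public void updateOrder() {
         // self-invocation: calls the target directly, bypassing the proxy
         saveOrder();
      }
      public void saveOrder() { }
   }

   public static final List<String> transactionLog = new ArrayList<>();

   // wraps the target like Spring's transactional proxy: "begin"/"commit" around each call
   static OrderService proxyFor(OrderService target) {
      return (OrderService) Proxy.newProxyInstance(
         OrderService.class.getClassLoader(),
         new Class<?>[] { OrderService.class },
         (proxy, method, args) -> {
            transactionLog.add("begin " + method.getName());
            Object result = method.invoke(target, args);
            transactionLog.add("commit " + method.getName());
            return result;
         });
   }

   public static void main(String[] args) {
      OrderService service = proxyFor(new OrderServiceImpl());
      service.updateOrder();
      // only updateOrder went through the proxy; the internal saveOrder() call did not
      System.out.println(transactionLog); // [begin updateOrder, commit updateOrder]
   }
}
```

If saveOrder() had its own “transaction” semantics, they would simply be skipped here, which is exactly what happens with a self-invoked @Transactional method.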

Do not use a “self” proxy reference in this case; it is hard to understand. Share a private method instead, or copy and paste the corresponding code section. For details, read https://codete.com/blog/5-common-spring-transactional-pitfalls.

Rollbacks

By default, only unchecked exceptions trigger rollbacks. So, try to avoid throwing checked exceptions within transactions. For details, read https://codete.com/blog/5-common-spring-transactional-pitfalls.
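If a checked exception must trigger a rollback nevertheless, the rollback rules can be widened explicitly via the rollbackFor parameter (OrderValidationException is a hypothetical checked exception):

```java
// also roll back on checked exceptions (default: only RuntimeException and Error)
@Transactional(rollbackFor = Exception.class)
public void saveOrder(Order order) throws OrderValidationException {
   // here: save the order
}
```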

References

  • https://docs.spring.io/spring/docs/4.2.x/spring-framework-reference/html/transaction.html#transaction-declarative-rolling-back
  • https://blog.frankel.ch/a-spring-hard-fact-about-transaction-management/
  • https://stackoverflow.com/a/23934667
  • https://codete.com/blog/5-common-spring-transactional-pitfalls/

Testing Spring Controllers with Spring Security, Embedded LDAP, and Different Application Contexts

To test a secured Spring application with Spring Security and LDAP authentication, we need to take the following steps:

  1. Activate and configure your embedded LDAP server for your tests.
  2. Define users and roles of your choice for your embedded LDAP server.
  3. Write Spring Tests.

Activate and configure your embedded LDAP server

There are at least the two following possibilities to start an embedded LDAP server with Spring:

  1. With Spring Boot
  2. With Spring Security

With Spring Boot

If Spring Boot finds an LDAP server implementation in the classpath and you declare some configuration properties in the application.properties, Spring Boot automatically starts an embedded LDAP server for each application context. For the LDAP server implementation called unboundid, you need to declare at least the following properties:

spring.ldap.embedded.ldif=classpath:bootstrap.ldif
spring.ldap.embedded.base-dn=dc=springframework,dc=org

# Further properties are:
spring.ldap.embedded.port
spring.ldap.embedded.validation.enabled
spring.ldap.embedded.credential.username
spring.ldap.embedded.credential.password
spring.ldap.embedded.ur

As highlighted above, Spring Boot starts an embedded LDAP server for each application context. Logically, that means it starts an embedded LDAP server for each test class. In practice, this is not always true, since Spring Boot caches and reuses application contexts. However, you should always expect more than one LDAP server to be running while executing your tests. For this reason, you should not declare a fixed port for your LDAP server. The server will then automatically use a free port. Otherwise, your tests will fail with “Address already in use”.

With Spring Security

If you want to use – for your tests – an embedded LDAP server as your central authentication management system with Spring Security, then you need to configure Spring Security as follows. Tip: Use two Spring Security configuration classes: one in your src/main/java and one in your src/test/java. The one in src/main/java could simply connect to an existing, non-embedded LDAP server. The one in src/test/java starts one embedded LDAP server for each new test context.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.crypto.factory.PasswordEncoderFactories;
import org.springframework.security.crypto.password.PasswordEncoder;

@Configuration
@EnableWebSecurity
public class SecurityTestConfiguration extends WebSecurityConfigurerAdapter {

	private static final Logger LOGGER = LoggerFactory.getLogger(SecurityTestConfiguration.class);

	@Override
	protected void configure(AuthenticationManagerBuilder auth) throws Exception {
		PasswordEncoder passwordEncoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
		// @formatter:off
		auth.ldapAuthentication()
			.userDnPatterns("uid={0},ou=people")
			.groupSearchBase("ou=groups")
			.contextSource()	// starts an LDAP server if url is not provided
				.ldif("classpath:bootstrap-spring-security.ldif")
				.root("dc=springframework,dc=org")
				// automatically adds the entry indicated by .root()
				.and()
			.passwordCompare()
				.passwordEncoder(passwordEncoder)
				.passwordAttribute("userPassword")
			.and();
		// @formatter:on
		LOGGER.info("Security configuration loaded.");
	}

	@Override
	protected void configure(HttpSecurity http) throws Exception {
		// @formatter:off
		http.sessionManagement()
			.sessionCreationPolicy(SessionCreationPolicy.STATELESS)
			.and();
		http.authorizeRequests()
			.anyRequest().authenticated()
			.and()
		.httpBasic();
		// @formatter:on
	}
}

If you do not declare a URL, but an LDIF file path instead, then Spring Security starts an embedded LDAP server automatically – again – for each application context.

One very important difference to Spring Boot is that the root entry is automatically created by Spring Security! So, your LDIF file must not contain the root entry. Otherwise, your application or your tests will fail with “An entry with DN ‘<dn of your root entry>’ already exists in the server.” We highlight this critical root entry in the next section.

Define users and roles of your choice for your embedded LDAP server

Below, you find an example LDIF file. The first entry is the root entry whose dn has to be used as the value for the property spring.ldap.embedded.base-dn or, respectively, as the argument for the method root(). Note that Spring Security automatically adds the root entry to the LDAP server. Hence, when using Spring Security, your LDIF file must not contain the root entry.

dn: dc=springframework,dc=org
objectclass: top
objectclass: domain
objectclass: extensibleObject
dc: springframework

dn: ou=groups,dc=springframework,dc=org
objectclass: top
objectclass: organizationalUnit
ou: groups

dn: ou=subgroups,ou=groups,dc=springframework,dc=org
objectclass: top
objectclass: organizationalUnit
ou: subgroups

dn: ou=people,dc=springframework,dc=org
objectclass: top
objectclass: organizationalUnit
ou: people

dn: ou=space cadets,dc=springframework,dc=org
objectclass: top
objectclass: organizationalUnit
ou: space cadets

dn: ou=\"quoted people\",dc=springframework,dc=org
objectclass: top
objectclass: organizationalUnit
ou: "quoted people"

dn: ou=otherpeople,dc=springframework,dc=org
objectclass: top
objectclass: organizationalUnit
ou: otherpeople

dn: uid=ben,ou=people,dc=springframework,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Ben Alex
sn: Alex
uid: ben
userPassword: {SHA}nFCebWjxfaLbHHG1Qk5UU4trbvQ=

dn: uid=bob,ou=people,dc=springframework,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Bob Hamilton
sn: Hamilton
uid: bob
userPassword: {noop}bobspassword

dn: uid=joe,ou=otherpeople,dc=springframework,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Joe Smeth
sn: Smeth
uid: joe
userPassword: joespassword

dn: cn=mouse\, jerry,ou=people,dc=springframework,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Mouse, Jerry
sn: Mouse
uid: jerry
userPassword: jerryspassword

dn: cn=slash/guy,ou=people,dc=springframework,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: slash/guy
sn: Slash
uid: slashguy
userPassword: slashguyspassword

dn: cn=quote\"guy,ou=\"quoted people\",dc=springframework,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: quote\"guy
sn: Quote
uid: quoteguy
userPassword: quoteguyspassword

dn: uid=space cadet,ou=space cadets,dc=springframework,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Space Cadet
sn: Cadet
uid: space cadet
userPassword: spacecadetspassword

dn: cn=developers,ou=groups,dc=springframework,dc=org
objectclass: top
objectclass: groupOfUniqueNames
cn: developers
ou: developer
uniqueMember: uid=ben,ou=people,dc=springframework,dc=org
uniqueMember: uid=bob,ou=people,dc=springframework,dc=org

dn: cn=managers,ou=groups,dc=springframework,dc=org
objectclass: top
objectclass: groupOfUniqueNames
cn: managers
ou: manager
uniqueMember: uid=ben,ou=people,dc=springframework,dc=org
uniqueMember: cn=mouse\, jerry,ou=people,dc=springframework,dc=org

dn: cn=submanagers,ou=subgroups,ou=groups,dc=springframework,dc=org
objectclass: top
objectclass: groupOfUniqueNames
cn: submanagers
ou: submanager
uniqueMember: uid=ben,ou=people,dc=springframework,dc=org

Write Spring Tests

And here are two example tests which test the same thing, but each with a different application context (see the different class annotations). As you will notice in the console output, two LDAP servers with different ports will be started, and both tests will pass.

import static org.hamcrest.Matchers.*;
import static org.springframework.security.test.web.servlet.request.SecurityMockMvcRequestPostProcessors.*;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.*;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.http.MediaType;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.request.MockHttpServletRequestBuilder;
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders;

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
public class HelloController1Test {

	@Autowired
	private MockMvc mvc;

	@Test
	public void getHelloWithValidLogin() throws Exception {
		// @formatter:off
		mvc.perform(get("/").with(httpBasic("bob", "bobspassword")).with(csrf()))
			.andExpect(status().isOk())
			.andExpect(content().string(equalTo("Greetings from Spring Boot!")));
		// @formatter:on
	}

	@Test
	public void getHelloWithInvalidPassword() throws Exception {
		MockHttpServletRequestBuilder requestBuilder = MockMvcRequestBuilders.get("/")
				.accept(MediaType.APPLICATION_JSON);

		mvc.perform(requestBuilder.with(httpBasic("bob", "wrong password"))).andExpect(status().isUnauthorized());
	}
}
import static org.hamcrest.Matchers.*;
import static org.springframework.security.test.web.servlet.request.SecurityMockMvcRequestPostProcessors.*;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.*;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.data.ldap.AutoConfigureDataLdap;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.context.annotation.Import;
import org.springframework.http.MediaType;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.request.MockHttpServletRequestBuilder;
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders;

import chw.tutorial.springboot.SecurityTestConfiguration;

@RunWith(SpringRunner.class)
@WebMvcTest(value = HelloController.class)
@Import(SecurityTestConfiguration.class)
@AutoConfigureDataLdap
public class HelloController3Test {

	@Autowired
	private MockMvc mvc;

	@Test
	public void getHelloWithValidLogin() throws Exception {
		// @formatter:off
		mvc.perform(get("/").with(httpBasic("bob", "bobspassword")).with(csrf()))
			.andExpect(status().isOk())
			.andExpect(content().string(equalTo("Greetings from Spring Boot!")));
		// @formatter:on
	}

	@Test
	public void getHelloWithInvalidPassword() throws Exception {
		MockHttpServletRequestBuilder requestBuilder = MockMvcRequestBuilders.get("/")
				.accept(MediaType.APPLICATION_JSON);

		mvc.perform(requestBuilder.with(httpBasic("bob", "wrong password"))).andExpect(status().isUnauthorized());
	}
}

Pitfalls

As described above, you may get one of the following error messages if you have not configured your embedded LDAP server correctly. In this blog post, we explained why these errors occur and gave appropriate solutions.

  1. “Address already in use: NET_bind”
  2. “com.unboundid.ldap.sdk.LDAPException: An entry with DN ‘dc=springframework,dc=org’ already exists in the server.”

Scheduling Tasks/Jobs with Spring (Boot)

Static Scheduling

Spring offers the annotation @Scheduled to define a task and its corresponding scheduling, e.g., execute this method every 5 seconds. The annotation saves you a great deal of work: in the background, it creates or looks up a scheduler, creates a task which invokes your method, and passes the task to the scheduler with your scheduling arguments (here: every 5 seconds).

Scheduling Parameters

The annotation @Scheduled allows you to specify a fixed delay (in ms), a fixed rate (in ms), or a more flexible cron expression if the first two options are not expressive enough for your use case. The following code snippet shows an implementation of our “every 5 seconds” example from above:

@Component // or any subtype like @Service
public class AnyComponent {

    private static final Logger log = LoggerFactory.getLogger(AnyComponent.class);

    private static final SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss");

    @Scheduled(fixedRate = 5000)
    public void reportCurrentTime() { // visibility is irrelevant: even private is possible
        log.info("The time is now {}", dateFormat.format(new Date()));
    }
}

Task Method Requirements

The method annotated with @Scheduled must fulfill the following requirements:

  • The enclosing class of the method must be a Spring component. That is, the class must be annotated with @Component or an annotation that includes @Component, such as @Service.
  • The method must be void.
  • The method must have no parameters.
  • The method may be private.

Read Parameter Values from a Properties File

If you do not want to specify the delay or the rate directly in the code, you can read it from the configuration context. For this purpose, Spring provides the parameters fixedDelayString and fixedRateString. The following example reads the seconds of the delay from the configuration key my.property.fixed.delay.seconds. If the key is not set, a default value of 60 is used. Since the delay expects a value in milliseconds, “000” is appended to the read value.

// with a default value of 60
@Scheduled(fixedDelayString = "${my.property.fixed.delay.seconds:60}000")

Further examples:

// without a default value
@Scheduled(fixedDelayString = "${my.property.fixed.delay.seconds}000")
// without appending
@Scheduled(fixedDelayString = "${my.property.fixed.delay.milliseconds}")
// hard-coded value as string (not recommended due to missing static type checking)
@Scheduled(fixedDelayString = "7000")

With these *String variants, you can define the delay or, respectively, the rate in a properties file. Note that the value is read only once at startup time. Thus, this approach is still static: you cannot change the value at runtime. We refer to the Javadoc of @Scheduled for more detailed information on the parameters.
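For illustration, the corresponding entry in application.properties might look like this (the key name is taken from the example above; the value is made up):

```properties
# delay in seconds, read once at application startup
my.property.fixed.delay.seconds=30
```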

Dynamic Scheduling

If you need to read your scheduling arguments dynamically, that is, at runtime, then @Scheduled is not sufficient. Instead, you can use Spring’s interface TaskScheduler. Declare a field of this type and annotate it with Spring’s @Autowired annotation. Then, you can pass a dynamically created task to the scheduler and specify whether it should be executed once or repeatedly with a fixed delay or at a fixed rate. We refer to the documentation of the TaskScheduler for more information.
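Under the hood, Spring’s TaskScheduler implementations are typically backed by a java.util.concurrent.ScheduledExecutorService. The core idea of dynamic scheduling, reading the period at runtime and handing a freshly created task to a scheduler, can be sketched with the plain JDK (the property key task.period.ms is made up for this example):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DynamicSchedulingSketch {
    public static void main(String[] args) throws Exception {
        // read the period at runtime, e.g., from a system property (hypothetical key)
        long periodMs = Long.parseLong(System.getProperty("task.period.ms", "50"));

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger executions = new AtomicInteger();

        // pass a dynamically created task to the scheduler at a fixed rate
        ScheduledFuture<?> future = scheduler.scheduleAtFixedRate(
                executions::incrementAndGet, 0, periodMs, TimeUnit.MILLISECONDS);

        Thread.sleep(400);
        future.cancel(false); // cancel to reschedule later with a new period
        scheduler.shutdown();

        System.out.println("executed at least 3 times: " + (executions.get() >= 3));
    }
}
```

With Spring’s TaskScheduler, you would inject the scheduler instead of creating it, but the pattern of creating, passing, and cancelling the task stays the same.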

Pitfalls with “Convention over Configuration”

The paradigm Convention over Configuration is a great way to develop (web) applications. I like it…IFF it is implemented well.

Problem

Unfortunately, many frameworks which rely on this paradigm are hard to work with. One example is Spring Boot. Users who are new to Spring Boot must read many pages of documentation to learn the “sensible defaults”. These defaults are often not intuitive to new users because they do not yet know the underlying concepts the defaults are meant for. Hence, the first steps with Spring Boot take time – more time than necessary.

Moreover, the defaults seem to work in a magical way – just by putting some new annotations at a class or a method. The Javadoc often does not explain how the annotations work. It would be so easy to add a short text describing that Spring searches the classpath for annotations.

Furthermore, annotations can also cause incompatibility issues with other annotations and conventions. For example, the annotation @DataJpaTest disables full auto-configuration, so you wonder why auto-wiring is not working anymore. Although the Javadoc mentions this issue, the behavior is unnecessarily complicated. I wish that adding annotations only added behavior.

Finally, you can often do one thing in more than one way. Usually, basic annotations are provided which are used by aggregated annotations to cover recurring use cases. This approach often results in annotated classes which have several basic annotations twice or thrice. Although this approach does not influence the functionality, it is really confusing for (new) users who need to write some piece of code: when should I use which (aggregated) annotations? Users who read the code could ask themselves: why did the author of this code add these annotations more than once?

Proposed Solution

  1. Uncover all conventions, e.g., by a configuration file with default values, such that the conventions are visible for (new) users. In this way, the user can read through the config file and learn what features and conventions are provided by the framework. We follow this approach with our framework Kieker.
  2. Each annotation should explain how it is processed. If you have a bunch of similar annotations, either copy and paste the same text or add a javadoc link to the processing engine where the approach is described once at a central place.
  3. Let annotations only add new behavior. Let them not disable other features.
  4. Only provide one way to do something to reduce confusion and time to read/understand. When the (new) user has several possibilities to do the same thing, it is laborious to understand when to use which approach. If there is no other choice, provide at least recommendations when to use which approach.
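As a hypothetical illustration of the first item, a framework could ship a fully commented default configuration file; every key below is invented, but each line documents one convention together with its default value:

```properties
# All entries show the framework's defaults; uncomment a line to override it.
#component.scan.basePackage=<package of the main class>
#server.port=8080
#logging.level=INFO
```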

How to write a Software Architecture Chapter?

If you need to document your planned or existing software architecture, it’s often difficult to know where to start. And even if you have thought about it some time, you probably don’t know how to structure your text so that a novice reader is able to follow you.

There are a few books out there whose authors describe very well how to write such software architecture descriptions. For example, have a look at “Softwarearchitekturen dokumentieren und kommunizieren” (ISBN-10: 3446443487). They explain in detail which contents are required in which order. However, who has time to read these books and find the relevant paragraphs?

For this reason, I put together the most important parts below. Of course, this overview is from my point of view. Nevertheless, it should give you a faster introduction to this topic compared to finding and reading the relevant paragraphs of interest in a book of many hundreds of pages.

I like templates (though not in the form of a cooking or baking recipe). Templates ease the start of writing. They give you a predefined structure that you can easily follow. In this way, you know what to write and in which order. So, here is my template for describing an arbitrary software architecture:

Introduction (without a section title)

  1. Introduce the domain of application of your architecture so that the reader knows what he or she should expect from your description. Otherwise, if you do not give context, the reader might expect anything, but probably not what you want to tell him or her.
  2. One sentence about the content of each following subsection so that the reader knows what you will describe in which order.

Architectural Drivers

  1. Requirements on your architecture, including corresponding justifications
    1. Use phrases like
      • “we expect from your architecture …” or
      • “our architecture must be able to …” or
      • “our architecture should …”
  2. Constraints of your architecture, including corresponding justifications
    1. Your software architecture will not represent the one-and-only architecture which matches to all of the problems in the world. Hence, you should name the constraints of your architecture.

Components of your Architecture

  1. Describe each component in a separate section
    1. Give it a name
    2. Describe what the component represents (and, if useful, what it does intentionally not represent)
    3. Describe the structure of your component, including corresponding justifications
    4. Describe the responsibilities of your component, that is, the specific tasks of the component, including corresponding justifications
    5. Describe what tasks are intentionally not part of the component

Progress Monitoring of Loops

When coding a loop, we often insert some logging code for debugging or progress monitoring purposes. The following code snippet shows an example:

for (int i = 0; i < invocations.size(); i++) {
  // loop body begin
  ...
  // loop body end
  System.out.println("current index: " + i); // pure Java
  long currentTimeInMs = System.currentTimeMillis();
  if (currentTimeInMs % 1000 == 0) {  // log (roughly) every second
    logger.debug("current index: {}", i);    // slf4j
  }
}

Disadvantages of the Naive Approach

However, this approach has some disadvantages. First, the logging code is executed by the same thread which executes the loop body. If the logging code causes an exception, for example, due to a NullPointerException, the execution of the loop body is interrupted. Thus, the execution of the application can break because of some fault in the logging code.

Moreover, the executing thread is slowed down by the additional logging code. Remember that I/O operations, like printing to the console or to a file, are slower than in-memory operations by several orders of magnitude. Especially for performance-critical code regions, this approach is not recommended.

Finally, the logging code is executed in each and every iteration of the loop although it is often sufficient to log the loop’s state every second or every minute. If you now think of a solution similar to the if statement in the snippet above, then remember that this additional check causes even more runtime overhead and further increases the potential for errors.

A Better Approach

A less intrusive, less error-prone, and faster approach is the following. First, we declare a volatile field progressIndex which represents the current index of the loop we want to monitor. For this purpose, we update the value of this field as shown in the following code snippet. We do not insert our original logging code here.

private volatile int progressIndex; // shared with the monitoring thread

for (int i = 0; i < invocations.size(); i++) {
    progressIndex = i;
    // loop body
}

Instead, we create an additional thread which now executes our logging code. This thread performs a do-while loop as shown in the following code snippet. We use `Thread.sleep()` to implement a user-defined time interval (here: 1000 ms). The thread reads the current index of the loop under monitoring via the variable progressIndex. As we declared it as volatile, its value is synchronized between our new monitoring thread and the application thread(s). We cache progressIndex in a local variable, since its value can change on each subsequent read access. In this way, the value is consistent within an iteration of the do-while loop.

To avoid interfering with the application, we declare our new monitoring thread as a daemon. In this way, the thread is automatically terminated by the JVM when all application threads have terminated.

The rest of the do-while loop computes the remaining execution time of the application and executes the domain-specific logging code. In this example, we print out the current number of NoSql queries which were already executed by the application loop.

Thread progressMonitoringThread = new Thread(new Runnable() {
    @Override
    public void run() {
        long lastTimestamp = System.currentTimeMillis();
        int lastProgressIndex = progressIndex;
        
        do {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
            final int localProgressIndex = progressIndex;
            
            long currentTimestamp = System.currentTimeMillis();
            long durationInMs = currentTimestamp - lastTimestamp;
            int count = localProgressIndex - lastProgressIndex;
            if (count <= 0) continue;
            long durationInMsPerElement = durationInMs / count;
            
            long remainingDurationInMs = durationInMsPerElement * (numQueries - (localProgressIndex + 1));
            String message = String.format("Executing query %s/%s (%s sec. remaining)",
                    localProgressIndex + 1, numQueries, Duration.ofMillis(remainingDurationInMs).getSeconds());
            System.out.println(message);
            
            lastTimestamp = currentTimestamp;
            lastProgressIndex = localProgressIndex;
        } while (progressIndex < numQueries);
    }
});
progressMonitoringThread.setDaemon(true);

progressMonitoringThread.start();

This approach does not break the execution of the application if the logging code is faulty. In the worst case, our monitoring thread terminates, without interfering with the application. Moreover, the runtime overhead is reduced to a minimum: we only set progressIndex in each iteration of the application loop. Finally, the approach allows us to log the state of the loop at a user-defined time interval. We are no longer forced to log on each iteration. Hence, we have changed the type of notification from an event-driven style to a time-driven style.
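The snippets above are fragments of a larger application. For experimentation, here is a trimmed, self-contained variant of the same approach (the field names and the simulated workload are made up; the monitoring interval is shortened so the demo finishes quickly):

```java
public class ProgressMonitorDemo {
    private static final int NUM_QUERIES = 50;  // simulated number of loop iterations
    private static volatile int progressIndex;  // shared with the monitoring thread

    public static void main(String[] args) throws Exception {
        Thread progressMonitoringThread = new Thread(() -> {
            int lastProgressIndex = progressIndex;
            do {
                try {
                    Thread.sleep(20); // user-defined monitoring interval
                } catch (InterruptedException e) {
                    return;
                }
                final int localProgressIndex = progressIndex; // cache the volatile read
                if (localProgressIndex > lastProgressIndex) {
                    System.out.println("Executing query " + (localProgressIndex + 1) + "/" + NUM_QUERIES);
                }
                lastProgressIndex = localProgressIndex;
            } while (progressIndex < NUM_QUERIES - 1);
        });
        progressMonitoringThread.setDaemon(true); // the JVM may exit while this thread runs
        progressMonitoringThread.start();

        for (int i = 0; i < NUM_QUERIES; i++) {
            progressIndex = i;   // the only monitoring-related statement in the loop
            Thread.sleep(5);     // simulated work
        }
        System.out.println("done");
    }
}
```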

Join Points and their Pointcuts in AspectJ

In the following list, we name each join point with its corresponding pointcut because it was initially not apparent to me how to capture all calls (or executions). You cannot use one single * wildcard to match all method and constructor calls. You need to define a composite pointcut which captures on the one hand all method calls and on the other hand all constructor calls. This differentiation by AspectJ is not directly reflected by the documentation, but only by AspectJ’s different pattern syntax for call (or execution). To capture method calls, you need to use a MethodPattern. To capture constructor calls, you need to use a ConstructorPattern. The complete syntax for all patterns of AspectJ can be found here.

  • method-calls: call(MethodPattern)
  • method-execution: execution(MethodPattern)
  • ctor-calls: call(ConstructorPattern)
  • ctor-execution: execution(ConstructorPattern)
  • static init: staticinitialization(TypePattern)
  • preinit: preinitialization(ConstructorPattern)
  • init: initialization(ConstructorPattern)
  • field-reference (a.k.a. field-read): get(FieldPattern)
  • field-set (a.k.a. field-write): set(FieldPattern)
  • handler: handler(TypePattern)
  • advice-execution: adviceexecution()
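For example, a composite pointcut that captures all method calls and all constructor calls can be sketched in AspectJ’s native syntax as follows (the aspect and pointcut names are made up):

```aspectj
public aspect AllCallsAspect {
    // MethodPattern for any method, ConstructorPattern for any constructor
    pointcut anyCall(): call(* *(..)) || call(*.new(..));

    before(): anyCall() {
        System.out.println("join point: " + thisJoinPoint.getSignature());
    }
}
```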