
Linux Tutorial: Checking Active SSH Connections and SSH Connection History
I don't know what's the matter with people: they don't learn by understanding, they learn by some other way — by rote or something. Their knowledge is so fragile! (Feynman)
ss is used to dump socket statistics. It shows information similar to netstat, but it can display more TCP and state information than other tools. We will pipe its output through grep to list only the active SSH sessions on our local host:
[root@node3 ~]# ss | grep -i ssh
tcp    ESTAB      0      0       10.0.2.32:ssh       10.0.2.31:37802
tcp    ESTAB      0      64      10.0.2.32:ssh       10.0.2.2:49966
tcp    ESTAB      0      0       10.0.2.32:ssh       10.0.2.30:56088
From the above example we know that three hosts are currently connected to our node3: we have active SSH connections from 10.0.2.31, 10.0.2.30 and 10.0.2.2.
last searches back through the file /var/log/wtmp (or the file designated by the -f flag) and displays a list of all users logged in (and out) since that file was created. Names of users and tty’s can be given, in which case last will show only those entries matching the arguments.
Using this command you can also find out which user each SSH connection between server and client was created with. Below we can see that the connection from 10.0.2.31 was made as the 'deepak' user, while the other two hosts connected to node3 as 'root'.
[root@node3 ~]# last -a | grep -i still
deepak   pts/1        Fri May 31 16:58   still logged in    10.0.2.31
root     pts/2        Fri May 31 16:50   still logged in    10.0.2.30
root     pts/0        Fri May 31 09:17   still logged in    10.0.2.2
Here I am grepping for the string "still" to match all lines containing "still logged in". So now we know we have three active SSH connections, from 10.0.2.31, 10.0.2.30 and 10.0.2.2.
who shows who is logged on to your Linux host, so this tool can also give us this information:
[root@node3 ~]# who
root     pts/0        2019-05-31 09:17 (10.0.2.2)
root     pts/1        2019-05-31 16:47 (10.0.2.31)
root     pts/2        2019-05-31 16:50 (10.0.2.30)
Using this command we get information similar to the last command: the user each source host used to connect to node3, plus the terminal on which each session is still active.
We usually refer to a terminal as a tty, but here the terminals show up as pts, which raises two questions:
What is the difference between tty and pts?
How to disable or enable individual tty terminal console in Linux?
w displays information about the users currently on the machine and their processes. It gives more information than the who and last commands and also serves our purpose of listing active SSH connections. Additionally, it shows the process running in each of those sessions.
Using the w command you also get the idle time details, i.e. how long each session has been idle. An SSH session that stays idle for a long period is a security risk, so it is recommended to kill such idle sessions; you can configure your Linux host to do this automatically (a sample configuration follows the output below).
[root@node3 ~]# w
 17:01:41 up  7:44,  3 users,  load average: 0.00, 0.01, 0.05
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    10.0.2.2         09:17    9:41   0.31s  0.00s less -s
deepak   pts/1    10.0.2.31        16:58    3:06   0.03s  0.03s -bash
root     pts/2    10.0.2.30        16:50    5.00s  0.07s  0.02s w
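One common way to enforce such a timeout for interactive shells is the TMOUT variable; here is a minimal sketch, assuming a bash login shell (the 300-second value and the file name are illustrative, not from this article):

# /etc/profile.d/idle-timeout.sh
# log out an interactive shell after 300 seconds without input
export TMOUT=300
readonly TMOUT

For hung or unresponsive clients, the ClientAliveInterval and ClientAliveCountMax settings in /etc/ssh/sshd_config serve a similar purpose on the server side.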
Similar to ss, we have the netstat command to show active SSH sessions. In fact, ss can be considered the newer replacement for netstat. Here we can see all the ESTABLISHED SSH sessions from remote hosts to our localhost node3. It is also possible that one or more of these active SSH connections are in a hung state, so you can configure your host to automatically disconnect or kill such hung or unresponsive SSH sessions in Linux.
[root@node3 ~]# netstat -tnpa | grep 'ESTABLISHED.*sshd'
tcp        0      0 10.0.2.32:22      10.0.2.31:37806    ESTABLISHED 10295/sshd: deepak
tcp        0      0 10.0.2.32:22      10.0.2.2:49966     ESTABLISHED 4329/sshd: root@pts
tcp        0      0 10.0.2.32:22      10.0.2.30:56088    ESTABLISHED 10125/sshd: root@pt
Now, to show active SSH sessions, the ps command may not give you results as accurate as the other commands discussed in this article, but it can give you some additional information, i.e. the PIDs of the sshd processes that are currently active and connected.
# ps auxwww | grep sshd: | grep -v grep
root      4329  0.0  0.1 154648  5512 ?   Ss   09:17   0:00 sshd: root@pts/0
root     10125  0.0  0.1 154648  5532 ?   Ss   16:50   0:00 sshd: root@pts/2
root     10295  0.0  0.1 154648  5480 ?   Ss   16:58   0:00 sshd: deepak [priv]
deepak   10301  0.0  0.0 156732  2964 ?   S    16:58   0:00 sshd: deepak@pts/1
To get the SSH connection history, you can always check your sshd logs for more information on connected or disconnected SSH sessions. The sshd log file location may vary from distribution to distribution; on my RHEL 7.4 the sshd logs are stored inside /var/log/sshd, while on a default RHEL installation they are written to /var/log/secure via the authpriv syslog facility.
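For example, assuming the default RHEL location /var/log/secure, we could pull recent login and logout events out of the log like this (the grep patterns match the usual OpenSSH log messages; adjust the path to wherever your system writes sshd logs):

[root@node3 ~]# grep 'sshd' /var/log/secure | grep -E 'Accepted|session closed' | tail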
Lastly, I hope the steps from this article to check active SSH connections and SSH connection history in Linux were helpful. So, let me know your suggestions and feedback using the comment section.
Setting up HTTPS for Spring Boot requires two steps: getting an SSL certificate, and configuring the application to use it.
We can generate an SSL certificate ourselves (self-signed certificate). Its use is intended just for development and testing purposes. In production, we should use a certificate issued by a trusted Certificate Authority (CA).
In either case, we’re going to see how to enable HTTPS in a Spring Boot application. Examples will be shown both for Spring Boot 1 and Spring Boot 2.
In this tutorial, we're going to generate a self-signed SSL certificate (or import an existing one into a keystore), use it to enable HTTPS in a Spring Boot application, redirect HTTP to HTTPS, and extract and distribute the certificate to clients.
If you don't already have a certificate, follow step 1a. If you have already got an SSL certificate, you can follow step 1b.
Throughout this tutorial, I'll use keytool, which ships with the JDK, and Spring Boot, in both version 1 and version 2.
Keytool is a certificate management utility provided together with the JDK, so if you have the JDK installed, you should already have keytool available. To check it, try running the command keytool --help from your Terminal prompt. Note that if you are on Windows, you might need to launch it from the \bin folder. For more information about this utility, you can read the official documentation.
On GitHub, you can find the source code for the application we are building in this tutorial.
First of all, we need to generate a pair of cryptographic keys, use them to produce an SSL certificate and store it in a keystore. The keytool documentation defines a keystore as a database of “cryptographic keys, X.509 certificate chains, and trusted certificates”.
To enable HTTPS, we’ll provide a Spring Boot application with this keystore containing the SSL certificate.
The two most common formats used for keystores are JKS, a proprietary format specific to Java, and PKCS12, an industry-standard format. JKS used to be the default choice, but now Oracle recommends adopting the PKCS12 format. We're going to see how to use both.
Let’s open our Terminal prompt and write the following command to create a JKS keystore:
keytool -genkeypair -alias tomcat -keyalg RSA -keysize 2048 -keystore keystore.jks -validity 3650 -storepass password
To create a PKCS12 keystore instead, which is what we should prefer, the command is the following:
keytool -genkeypair -alias tomcat -keyalg RSA -keysize 2048 -storetype PKCS12 -keystore keystore.p12 -validity 3650 -storepass password
Let's have a closer look at the command we just ran:
- genkeypair: generates a key pair;
- alias: the alias name for the item we are generating;
- keyalg: the cryptographic algorithm to generate the key pair;
- keysize: the size of the key. We have used 2048 bits, but 4096 would be a better choice for production;
- storetype: the type of keystore;
- keystore: the name of the keystore;
- validity: the validity period, in days;
- storepass: a password for the keystore.

When running the previous command, we will be asked to input some information, but we are free to skip all of it (just press Return to skip an option). When asked if the information is correct, we should type yes. Finally, we hit Return to use the keystore password as key password as well.
What is your first and last name?
[Unknown]:
What is the name of your organizational unit?
[Unknown]:
What is the name of your organization?
[Unknown]:
What is the name of your City or Locality?
[Unknown]:
What is the name of your State or Province?
[Unknown]:
What is the two-letter country code for this unit?
[Unknown]:
Is CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct?
[no]: yes
Enter key password for <tomcat>
(RETURN if same as keystore password):
At the end of this operation, we’ll get a keystore containing a brand new SSL certificate.
To check the content of the keystore following the JKS format, we can use keytool again:
keytool -list -v -keystore keystore.jks
To test the content of a keystore following the PKCS12 format:
keytool -list -v -storetype pkcs12 -keystore keystore.p12
Should we already have a JKS keystore, we have the option to migrate it to PKCS12; keytool has a convenient command for that:
keytool -importkeystore -srckeystore keystore.jks -destkeystore keystore.p12 -deststoretype pkcs12
In case we have already got an SSL certificate, for example, one issued by Let’s Encrypt, we can import it into a keystore and use it to enable HTTPS in a Spring Boot application.
We can use keytool to import our certificate in a new keystore.
keytool -import -alias tomcat -file myCertificate.crt -keystore keystore.p12 -storepass password
To get more information about the keystore and its format, please refer to the previous section.
Whether our keystore contains a self-signed certificate or one issued by a trusted Certificate Authority, we can now set up Spring Boot to accept requests over HTTPS instead of HTTP by using that certificate.
The first thing to do is to place the keystore file inside the Spring Boot project, either in the resources folder or in the root folder.
Then, we configure the server to use our brand new keystore and enable https. Let’s go through the steps both for Spring Boot 1 and Spring Boot 2.
For Spring Boot 1, let's open our application.properties file (or application.yml) and define the following properties:
server.port=8443
server.ssl.key-store-type=PKCS12
server.ssl.key-store=classpath:keystore.p12
server.ssl.key-store-password=password
server.ssl.key-alias=tomcat
security.require-ssl=true

To enable HTTPS for our Spring Boot 2 application, let's open our application.yml file (or application.properties) and define the following properties:
server:
ssl:
key-store: classpath:keystore.p12
key-store-password: password
key-store-type: pkcs12
key-alias: tomcat
key-password: password
port: 8443

Let's have a closer look at the SSL configuration we have just defined in our Spring Boot application properties.
- server.port: the port on which the server is listening. We have used 8443 rather than the default 8080 port.
- server.ssl.key-store: the path to the key store that contains the SSL certificate. In our example, we want Spring Boot to look for it in the classpath.
- server.ssl.key-store-password: the password used to access the key store.
- server.ssl.key-store-type: the type of the key store (JKS or PKCS12).
- server.ssl.key-alias: the alias that identifies the key in the key store.
- server.ssl.key-password: the password used to access the key in the key store.

When using Spring Security, we can configure it to automatically block any request coming from a non-secure HTTP channel.
In a Spring Boot 1 application, we can achieve that by setting the security.require-ssl property to true, without explicitly touching our Spring Security configuration class.
To achieve the same result in a Spring Boot 2 application, we need to extend the WebSecurityConfigurerAdapter class, since the security.require-ssl property has been deprecated.
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.requiresChannel()
.anyRequest()
.requiresSecure();
}
}

For more information about how to configure SSL in Spring Boot, you can have a look at the Reference Guide. If you want to find out which properties are available to configure SSL, you can refer to the definition in the code-base.
Congratulations! You have successfully enabled HTTPS in your Spring Boot application! Give it a try: run the application, open your browser and check if everything works as it should.
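If you prefer the command line over a browser for a quick smoke test, something like the following should work (the -k flag tells curl to accept our self-signed certificate):

curl -vk https://localhost:8443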
Now that we have enabled HTTPS in our Spring Boot application and blocked any HTTP request, we want to redirect all traffic to HTTPS.
Spring allows defining just one network connector in application.properties (or application.yml). Since we have used it for HTTPS, we have to set the HTTP connector programmatically for our Tomcat web server.
The implementations for Spring Boot 1 and Spring Boot 2 are almost the same; the only difference is that some classes for server configuration have been renamed in Spring Boot 2. Here is the Spring Boot 1 version first:
@Configuration
public class ServerConfig {
@Bean
public EmbeddedServletContainerFactory servletContainer() {
TomcatEmbeddedServletContainerFactory tomcat = new TomcatEmbeddedServletContainerFactory() {
@Override
protected void postProcessContext(Context context) {
SecurityConstraint securityConstraint = new SecurityConstraint();
securityConstraint.setUserConstraint("CONFIDENTIAL");
SecurityCollection collection = new SecurityCollection();
collection.addPattern("/*");
securityConstraint.addCollection(collection);
context.addConstraint(securityConstraint);
}
};
tomcat.addAdditionalTomcatConnectors(getHttpConnector());
return tomcat;
}
private Connector getHttpConnector() {
Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
connector.setScheme("http");
connector.setPort(8080);
connector.setSecure(false);
connector.setRedirectPort(8443);
return connector;
}
}

And here is the Spring Boot 2 version:

@Configuration
public class ServerConfig {
@Bean
public ServletWebServerFactory servletContainer() {
TomcatServletWebServerFactory tomcat = new TomcatServletWebServerFactory() {
@Override
protected void postProcessContext(Context context) {
SecurityConstraint securityConstraint = new SecurityConstraint();
securityConstraint.setUserConstraint("CONFIDENTIAL");
SecurityCollection collection = new SecurityCollection();
collection.addPattern("/*");
securityConstraint.addCollection(collection);
context.addConstraint(securityConstraint);
}
};
tomcat.addAdditionalTomcatConnectors(getHttpConnector());
return tomcat;
}
private Connector getHttpConnector() {
Connector connector = new Connector(TomcatServletWebServerFactory.DEFAULT_PROTOCOL);
connector.setScheme("http");
connector.setPort(8080);
connector.setSecure(false);
connector.setRedirectPort(8443);
return connector;
}
}

When using a self-signed SSL certificate, our browser won't trust our application and will warn the user that it's not secure. And that'll be the same with any other client.
It’s possible to make a client trust our application by providing it with our certificate.
We have stored our certificate inside a keystore, so we need to extract it. Again, keytool supports us very well:
keytool -export -keystore keystore.jks -alias tomcat -file myCertificate.crt
The keystore can be in JKS or PKCS12 format. During the execution of this command, keytool will ask us for the keystore password that we set at the beginning of this tutorial (the extremely secure password).
Now we can import our certificate into our client. Later, we’ll see how to import the certificate into the JRE in case we need it to trust our application.
When using a keystore in the industry-standard PKCS12 format, we should be able to use it directly without extracting the certificate.
I suggest you check the official guide on how to import a PKCS12 file into your specific client. On macOS, for example, we can directly import a certificate into the Keychain Access (which browsers like Safari, Chrome and Opera rely on to manage certificates).
If deploying the application on localhost, we may need to do a further step from our browser: enabling insecure connections with localhost.
In Firefox, we are shown an alert message. To access the application, we need to explicitly define an exception for it and make Firefox trust the certificate.
In Chrome, we can enter the following URL in the address bar: chrome://flags/#allow-insecure-localhost and enable the corresponding option.
To make the JRE trust our certificate, we need to import it inside cacerts: the JRE trust store in charge of holding all certificates that can be trusted.
First, we need to know the path to our JDK home. A quick way to find it, if we are using Eclipse or STS as our IDE, is by going to Preferences > Java > Installed JREs. If using IntelliJ IDEA, we can access this information by going to Project Structure > SDKs and look at the value of the JDK home path field.
On macOS, it could be something like /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home. In the following, we’ll refer to this location by using the placeholder $JDK_HOME.
Then, from our Terminal prompt, let’s insert the following command (we might need to run it with administrator privileges by prefixing it with sudo):
keytool -importcert -file myCertificate.crt -alias tomcat -keystore $JDK_HOME/jre/lib/security/cacerts
We’ll be asked to input the JRE keystore password. If you have never changed it, it should be the default one: changeit or changeme, depending on the operating system. Finally, keytool will ask if you want to trust this certificate: let’s say yes.
If everything went right, we’d see the message Certificate was added to keystore. Great!
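To double-check that the certificate really ended up in the trust store, we can list it by its alias; keytool will prompt for the same trust store password again:

keytool -list -alias tomcat -keystore $JDK_HOME/jre/lib/security/cacerts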
In this tutorial, we have seen how to generate a self-signed SSL certificate, how to import an existing certificate into a keystore, how to use it to enable HTTPS inside a Spring Boot application, how to redirect HTTP to HTTPS and how to extract and distribute the certificate to clients.
On GitHub, you can find the source code for the application we have built in this tutorial.
If you want to protect the access to some resources of your application, consider using Keycloak for the authentication and authorization of the users visiting your Spring Boot or Spring Security application.

The Java Platform Module System (JPMS) brings modularization to Java and the JVM and it changes how we program in the large. To get the most out of it, we need to know it well, and the first step is to learn the basics. In this tutorial I'll first show you a simple Hello World example and then we'll take an existing demo application and modularize it with Java 9. We will create module declarations (module-info.java) and use the module path to compile, package, and run the application: code first, explanations second, so you can cut to the chase.
I use two projects in this tutorial and both can be found on GitHub: The first is a very simple Hello World example, the other the ServiceMonitor, which is the same one I use in my book on the module system. Check them out if you want to take a closer look. All commands like javac, jar, and java refer to the Java 9 variants.
Let’s start with the simplest possible application, one that prints Hello, modular World! Here’s the class:
package org.codefx.demo.jpms;

public class HelloModularWorld {

    public static void main(String[] args) {
        System.out.println("Hello, modular World!");
    }

}
To become a module, it needs a module-info.java in the project's root source directory:
module org.codefx.demo.jpms_hello_world {
    // this module only needs types from the base module 'java.base';
    // because every Java module needs 'java.base', it is not necessary
    // to explicitly require it; I do it nonetheless for demo purposes
    requires java.base;
    // this export makes little sense for the application,
    // but once again, I do this for demo purposes
    exports org.codefx.demo.jpms;
}
With the common src/main/java directory structure, the program’s directory layout looks as follows:

These are the commands to compile, package and launch it:
$ javac
    -d target/classes
    ${source-files}
$ jar --create
    --file target/jpms-hello-world.jar
    --main-class org.codefx.demo.jpms.HelloModularWorld
    -C target/classes .
$ java
    --module-path target/jpms-hello-world.jar
    --module org.codefx.demo.jpms_hello_world
Very similar to what we would have done for a non-modular application, except we’re now using something called a “module path” and can define the project’s main class (without a manifest). Let’s see how that works.
Modules are like JARs with additional characteristics
The basic building blocks of the JPMS are modules (surprise!). Like JARs, they are containers for types and resources; but unlike JARs, they have additional characteristics. The most fundamental ones are a name, explicit dependencies, and exported packages, which we'll go through below.
The JDK was split into about a hundred so-called platform modules. You can list them with java --list-modules and look at an individual module with java --describe-module ${module}. Go ahead, give it a try with java.sql or java.logging:
$ java --describe-module java.sql
> java.sql@9
> exports java.sql
> exports javax.sql
> exports javax.transaction.xa
> requires java.logging transitive
> requires java.base mandated
> requires java.xml transitive
> uses java.sql.Driver
A module's properties are defined in a module declaration, a file module-info.java in the project's root, which looks as follows:
module ${module-name} {
    requires ${module-name};
    exports ${package-name};
}
It gets compiled into a module-info.class, called the module descriptor, and ends up in the JAR's root. This descriptor is the only difference between a plain JAR and a modular JAR.
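A quick way to see that descriptor is to ask jar for it; for the Hello World artifact we build below, the call would look like this:

$ jar --describe-module --file target/jpms-hello-world.jar

On a modular JAR this prints the module's name and its requires and exports directives; on a plain JAR it prints the automatic module that would be derived from it instead.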
Let’s go through the three module properties one by one: name, dependencies, exports.
The most basic property that JARs are missing is a name that the compiler and JVM can use to identify them. It is hence the most prominent characteristic of a module. We will have the possibility and even the obligation to give every module we create a name.
The best name for a module is the reverse-domain naming scheme that is already commonly used for packages
Naming a module will often be pretty natural as most tools we use on a daily basis, be it IDEs, build tools, or even issue trackers and version control systems, already have us name our projects. But while it makes sense to take that name as a springboard on the search for a module name, it is important to choose wisely!
The module system leans heavily on a module's name. Conflicting or evolving names in particular cause trouble, so it is important that the name is globally unique and stable.
The best way to achieve that is the reverse-domain naming scheme that is already commonly used for packages:
module org.codefx.demo.jpms {
}
All dependencies have to be made explicit with requires directives
Another thing we missed in JARs was the ability to declare dependencies, but with the module system, these times are over: Dependencies have to be made explicit – all of them, on JDK modules as well as on third-party libraries or frameworks.
Dependencies are declared with requires directives, which consist of the keyword itself followed by a module name. When scanning modules, the JPMS builds a readability graph, where modules are nodes and requires directives get turned into so-called readability edges – if module org.codefx.demo.jpms requires module java.base, then at runtime org.codefx.demo.jpms reads java.base.
The module system will throw an error if it cannot find a required module with the right name, which means compiling as well as launching an application will fail if modules are missing. This achieves reliable configuration, one of the goals of the module system, but can be prohibitively strict; check my post on optional dependencies for a more lenient alternative.
All types the Hello World example needs can be found in the JDK module java.base, the so-called base module. Because it contains essential types like Object, all Java code needs it and so it doesn’t have to be required explicitly. Still, I do it in this case to show you a requires directive:
module org.codefx.demo.jpms {
    requires java.base;
}
A module’s API is defined by its exports directives
A module lists the packages it exports. For code in one module (say org.codefx.demo.jpms) to access types in another (say String in java.base), the following accessibility rules must be fulfilled: the accessed type has to be public, its package has to be exported by the module containing it, and the accessing module has to read the exporting one.
Reflection lost its superpowers
If any of these rules is violated at compile or run time, the module system throws an error. This means that public is no longer really public: a public type in a non-exported package is as inaccessible to the outside world as a non-public type in an exported package. Also note that reflection lost its superpowers. It is bound by the exact same accessibility rules unless command line flags are used.
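One of those flags is --add-opens, which breaks a package open for deep reflection at launch time. An illustrative invocation (not needed for our demo) that opens java.base's java.lang package to all class path code:

$ java --add-opens java.base/java.lang=ALL-UNNAMED
    --module-path target/jpms-hello-world.jar
    --module org.codefx.demo.jpms_hello_world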
Since our example has no meaningful API, no outside code needs to access it and so we don’t actually have to export anything. Once again I’ll do it nonetheless for demonstration purposes:
module org.codefx.demo.jpms_hello_world {
    requires java.base;
    exports org.codefx.demo.jpms;
}
We now know how we can define modules and their essential properties. What’s still a little unclear is how exactly we tell the compiler and runtime about them. The answer is a new concept that parallels the class path:
The module path is a list whose elements are artifacts or directories that contain artifacts. Depending on the operating system, module path elements are either separated by : (Unix-based) or ; (Windows). It is used by the module system to locate required modules that are not found among the platform modules. Both javac and java as well as other module-related commands can process it: the command line options are --module-path and -p.
All artifacts on the module path are turned into modules. This is even true for plain JARs, which get turned into automatic modules.
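To see which name a plain JAR would get as an automatic module, we can once again use jar --describe-module (the JAR name here is hypothetical):

$ jar --describe-module --file libs/some-library.jar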
Compiling works much like without the module system:
$ javac
    -d target/classes
    ${source-files}
(You of course have to replace ${source-files} with an actual enumeration of the involved files, but that crowds the examples, so I don't do it here.)
The module system kicks in as soon as a module-info.java is among the source files. All non-JDK dependencies the module under compilation requires need to be on the module path. For the Hello World example, there are no such dependencies.
Packaging with jar is unchanged as well. The only difference is that we no longer need a manifest to declare an application's entry point; we can use --main-class for that:
$ jar --create
    --file target/jpms-hello-world.jar
    --main-class org.codefx.demo.jpms.HelloModularWorld
    -C target/classes .
Finally, launching looks a little different. We use the module path instead of the class path to tell the JPMS where to find modules. All we need to do beyond that is to name the main module with --module:
$ java
    --module-path target/jpms-hello-world.jar
    --module org.codefx.demo.jpms_hello_world
And that's it! We've created a very simple but nonetheless modular Hello World application and successfully built and launched it. Now it's time to turn to a slightly less trivial example to see mechanisms like dependencies and exports in action.
Let’s imagine a network of services that cooperate to delight our users; maybe a social network or a video platform. We want to monitor those services to determine how healthy the system is and spot problems when they occur (instead of when customers report them). This is where the example application, the ServiceMonitor comes in: It monitors these services (another big surprise).
As luck would have it, the services already collect the data we want, so all the ServiceMonitor needs to do is query them periodically. Unfortunately not all services expose the same REST API – two generations are in use, Alpha and Beta. That’s why ServiceObserver is an interface with two implementations.
Once we have the diagnostic data, in the form of DiagnosticDataPoints, they can be fed to a Statistician, which aggregates them to Statistics. These, in turn, are stored in a StatisticsRepository as well as made available via REST by the MonitorServer. The Monitor class ties everything together.
All in all, we end up with these types:

The application depends on the Spark micro web framework and we reference it by the module name spark.core. It can be found in the libs directory together with its transitive dependencies.
With what we learned so far, we already know how to organize the application as a single module. First, we create the module declaration module-info.java in the project's root:
module monitor {
    requires spark.core;
}
Note that we should choose a module name like org.codefx.demo.monitor, but that would crowd the examples, so I’ll stick to the shorter monitor. As explained, it requires spark.core and because the application has no meaningful API, it exports no packages.
We can then compile, package, and run it as follows:
$ javac
    --module-path libs
    -d classes/monitor
    ${source-files}
$ jar --create
    --file mods/monitor.jar
    --main-class monitor.Main
    -C classes/monitor .
$ java
    --module-path mods
    --module monitor
As you can see, we no longer use Maven’s target directory and instead create classes in classes and modules in mods. This makes the examples easier to parse. Note that unlike earlier, we already have to use the module path during compilation because this application has non-JDK dependencies.
And with that we’ve created a single-module ServiceMonitor!
Now that we got one module going, it’s time to really start using the module system and split the ServiceMonitor up. For an application of this size it is of course ludicrous to turn it into several modules, but it’s a demo, so here we go.
The most common way to modularize applications is a separation by concerns. ServiceMonitor has the following, with the related types in parentheses: observing services (ServiceObserver, DiagnosticDataPoint), aggregating statistics (Statistician, Statistics), persisting statistics (StatisticsRepository), and exposing them via REST (MonitorServer), with the Monitor class tying everything together.
But not only the domain logic generates requirements. There are also technical ones: the observer contract has to be shared, while the Alpha and Beta implementations of it should be separately replaceable.
This results in the following modules with the mentioned publicly visible types: monitor.observer, monitor.observer.alpha, monitor.observer.beta, monitor.statistics, monitor.persistence, monitor.rest, and monitor.
Superimposing these modules over the class diagram, it is easy to see the module dependencies emerge:

A real-life project consists of myriad files of many different types. Obviously, source files are the most important ones but nonetheless only one kind of many – others are test sources, resources, build scripts or project descriptions, documentation, source control information, and many others. Any project has to choose a directory structure to organize those files and it is important to make sure it does not clash with the module system’s characteristics.
If you have been following the module system’s development under Project Jigsaw and studied the official quick start guide or some early tutorials, you might have noticed that they use a particular directory structure, where there’s a src directory with a subdirectory for each project. That way ServiceMonitor would look as follows:
ServiceMonitor
 + classes
 + mods
 - src
   + monitor
   - monitor.observer
     - monitor
       - observer
           DiagnosticDataPoint.java
           ServiceObserver.java
     module-info.java
   + monitor.observer.alpha
   + monitor.observer.beta
   + monitor.persistence
   + monitor.rest
   + monitor.statistics
 - test-src
   + monitor
   + monitor.observer
   + monitor.observer.alpha
   + monitor.observer.beta
   + monitor.persistence
   + monitor.rest
   + monitor.statistics
This results in a hierarchy concern/module and I don’t like it. Most projects that consist of several sub-projects (what we now call modules) prefer separate root directories, where each contains a single module’s sources, tests, resources, and everything else mentioned earlier. They use a hierarchy module/concern and this is what established project structures provide.
The default directory structure, implicitly understood by tools like Maven and Gradle, implements that hierarchy. First and foremost, it gives each module its own directory tree. In that tree, the src directory contains production code and resources (in main/java and main/resources, respectively) as well as test code and resources (in test/java and test/resources, respectively):
ServiceMonitor
 + monitor
 - monitor.observer
   - src
     - main
       - java
         - monitor
           - observer
               DiagnosticDataPoint.java
               ServiceObserver.java
         module-info.java
       + resources
     + test
       + java
       + resources
   + target
 + monitor.observer.alpha
 + monitor.observer.beta
 + monitor.persistence
 + monitor.rest
 + monitor.statistics
I will organize the ServiceMonitor almost like that, with the only difference that I will create the bytecode in a directory classes and the JARs in a directory mods, which are both right below ServiceMonitor, because that makes the scripts shorter and more readable.
Let's now see what those module declarations have to contain and how we can compile and run the application.
We've already covered how modules are declared using module-info.java, so there's no need to go into details. Once you've figured out how modules need to depend on one another (your build tool should know that; otherwise ask JDeps), you can put in requires directives, and the necessary exports emerge naturally from imports across module boundaries.
module monitor.observer {
    exports monitor.observer;
}

module monitor.observer.alpha {
    requires monitor.observer;
    exports monitor.observer.alpha;
}

module monitor.observer.beta {
    requires monitor.observer;
    exports monitor.observer.beta;
}

module monitor.statistics {
    requires monitor.observer;
    exports monitor.statistics;
}

module monitor.persistence {
    requires monitor.statistics;
    exports monitor.persistence;
}

module monitor.rest {
    requires spark.core;
    requires monitor.statistics;
    exports monitor.rest;
}

module monitor {
    requires monitor.observer;
    requires monitor.observer.alpha;
    requires monitor.observer.beta;
    requires monitor.statistics;
    requires monitor.persistence;
    requires monitor.rest;
}
By the way, you can use JDeps to create an initial set of module declarations. Whether created automatically or manually, in a real-life project you should verify whether your dependencies and APIs are as you want them to be. It is likely that over time, some quick fixes introduced relationships that you’d rather get rid of. Do that now or create some backlog issues.
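Such a JDeps run could look like this; the output directory name is arbitrary and the call is just a sketch:

$ jdeps --generate-module-info generated-declarations mods/monitor.statistics.jar

JDeps analyzes the JAR's dependencies and writes a first draft of its module-info.java into the given directory, which you can then review and refine.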
Very similar to before when it was only a single module, but more often:
$ javac
    -d classes/monitor.observer
    ${source-files}
$ jar --create
    --file mods/monitor.observer.jar
    -C classes/monitor.observer .

# monitor.observer.alpha depends on monitor.observer,
# so we place 'mods', which contains monitor.observer.jar,
# on the module path
$ javac
    --module-path mods
    -d classes/monitor.observer.alpha
    ${source-files}
$ jar --create
    --file mods/monitor.observer.alpha.jar
    -C classes/monitor.observer.alpha .

# more of the same ... until we come to monitor,
# which once again defines a main class
$ javac
    --module-path mods
    -d classes/monitor
    ${source-files}
$ jar --create
    --file mods/monitor.jar
    --main-class monitor.Main
    -C classes/monitor .
Congratulations, you’ve got the basics covered! You now know how to organize, declare, compile, package, and launch modules and understand what role the module path, the readability graph, and modular JARs play.
If you weren’t so damn curious this post could be over now, but instead I’m going to show you a few of the more advanced features, so you know what to read about next.
The ServiceMonitor module monitor.observer.alpha describes itself as follows:
module monitor.observer.alpha {
    requires monitor.observer;
    exports monitor.observer.alpha;
}
Instead it should actually do this:
module monitor.observer.alpha {
    requires transitive monitor.observer;
    exports monitor.observer.alpha;
}
Spot the transitive in there? It makes sure that any module reading monitor.observer.alpha also reads monitor.observer. Why would you do that? Here's a method from alpha's public API:
public static Optional<ServiceObserver> createIfAlphaService(String service) {
    // ...
}
It returns an Optional<ServiceObserver>, but ServiceObserver comes from the monitor.observer module. That means every module that wants to call alpha's createIfAlphaService needs to read monitor.observer as well, or such code won't compile. That's pretty inconvenient, so modules like alpha that use another module's type in their own public API should generally require that module with the transitive modifier.
There are more uses for implied readability.
This is quite straightforward: if you want to compile against a module's types but don't want to force its presence at runtime, you can mark your dependency as optional with the static modifier:
module monitor {
    requires monitor.observer;
    requires static monitor.observer.alpha;
    requires static monitor.observer.beta;
    requires monitor.statistics;
    requires monitor.persistence;
    requires static monitor.rest;
}
In this case monitor seems to be ok with the alpha and beta observer implementations possibly being absent and it looks like the REST endpoint is optional, too.
There are a few things to consider when coding against optional dependencies.
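The most obvious one: since the module may be absent at runtime, the code has to check for its presence before using its types. A minimal sketch using the ModuleLayer API introduced in Java 9 (the module name matches the declaration above):

// true only if the optional module made it into the runtime
boolean restPresent = ModuleLayer.boot()
    .findModule("monitor.rest")
    .isPresent();
if (restPresent) {
    // only now is it safe to call into monitor.rest
}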
Regular exports make you decide whether a package's public types are accessible only within the same module or to all modules. Sometimes you need something in between, though. If you're shipping a bunch of modules, you might end up in a situation where you'd like to share code between those modules, but not outside of them. Qualified exports to the rescue!
module monitor.util {
    exports monitor.util to monitor, monitor.statistics;
}
This way only monitor and monitor.statistics can access the monitor.util package.
I said earlier that reflection’s superpowers were revoked – it now has to play by the same rules as regular access. Reflection still has a special place in Java’s ecosystem, though, as it enables frameworks like Hibernate, Spring and so many others.
The bridge between those two poles is formed by open packages and open modules:
module monitor.persistence {
    opens monitor.persistence.dtos;
}

// or even

open module monitor.persistence { }
An open package is inaccessible at compile time (so you can't write code against its types), but accessible at run time (so reflection works). More than just being accessible, it allows reflective access to non-public types and members (this is called deep reflection). Open packages can be qualified just like exports, and open modules simply open all their packages.
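To make deep reflection concrete, here's a sketch of what a framework could now do with a type from the opened monitor.persistence.dtos package; the StatisticsDto class and its field are hypothetical:

// load a DTO type reflectively and access a non-public field
Class<?> dto = Class.forName("monitor.persistence.dtos.StatisticsDto");
Field average = dto.getDeclaredField("average");
// succeeds because the package is open; on a package that is merely
// exported, this would throw an InaccessibleObjectException
average.setAccessible(true);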
Instead of having the main module monitor depend on monitor.observer.alpha and monitor.observer.beta, so it can create instances of AlphaServiceObserver and BetaServiceObserver, it could let the module system make that connection:
module monitor {
    requires monitor.observer;
    // monitor wants to use a service
    uses monitor.observer.ServiceObserverFactory;
    requires monitor.statistics;
    requires monitor.persistence;
    requires monitor.rest;
}

module monitor.observer.alpha {
    requires monitor.observer;
    // alpha provides a service implementation
    provides monitor.observer.ServiceObserverFactory
        with monitor.observer.alpha.AlphaServiceObserverFactory;
}

module monitor.observer.beta {
    requires monitor.observer;
    // beta provides a service implementation
    provides monitor.observer.ServiceObserverFactory
        with monitor.observer.beta.BetaServiceObserverFactory;
}
This way, monitor can do the following to get an instance of each provided observer factory:
List<ServiceObserverFactory> observerFactories = ServiceLoader
    .load(ServiceObserverFactory.class).stream()
    .map(Provider::get)
    .collect(toList());
It uses the ServiceLoader API, which has existed since Java 6, to inform the module system that it needs all implementations of ServiceObserverFactory. The JPMS will then track down all modules in the readability graph that provide that service, create an instance of each, and return them.
There are two particularly interesting consequences: the consuming module no longer has to depend on the providing modules (monitor no longer requires monitor.observer.alpha or monitor.observer.beta), and new providers can be plugged in simply by placing them on the module path.
Services are a wonderful way to decouple modules and it's awesome that the module system gives this mostly ignored concept a second life and puts it into a prominent place.
Ok, we're really done now and you've learned a lot. Quick recap: we covered how to organize, declare, compile, package, and launch modules, and then looked at implied readability, optional dependencies, qualified exports, open packages and modules, and services.
If you want to learn more about the module system, read the posts I linked above, check the JPMS tag, or get my book The Java Module System (Manning). Also, be aware that migrating to Java 9 can be challenging – check my migration guide for details.