Code-First Java Module System Tutorial
The Java Platform Module System (JPMS) brings modularization to Java and the JVM, and it changes how we program in the large. To get the most out of it, we need to know it well, and the first step is to learn the basics. In this tutorial I’ll first show you a simple Hello World example and then we’ll take an existing demo application and modularize it with Java 9. We will create module declarations (module-info.java) and use the module path to compile, package, and run the application – code first, explanations second, so you can cut to the chase.
I use two projects in this tutorial and both can be found on GitHub: the first is a very simple Hello World example, the other is the ServiceMonitor, which is the same one I use in my book on the module system. Check them out if you want to take a closer look. All commands like javac, jar, and java refer to the Java 9 variants.
Hello, Modular World
Let’s start with the simplest possible application, one that prints Hello, modular World! Here’s the class:
package org.codefx.demo.jpms;

public class HelloModularWorld {

    public static void main(String[] args) {
        System.out.println("Hello, modular World!");
    }
}
To become a module, it needs a module-info.java in the project’s root source directory:
module org.codefx.demo.jpms_hello_world {
    // this module only needs types from the base module 'java.base';
    // because every Java module needs 'java.base', it is not necessary
    // to explicitly require it – I do it nonetheless for demo purposes
    requires java.base;
    // this export makes little sense for the application,
    // but once again, I do this for demo purposes
    exports org.codefx.demo.jpms;
}
With the common src/main/java directory structure, module-info.java ends up directly in src/main/java, next to the org/codefx/demo/jpms package directory that contains HelloModularWorld.java.
These are the commands to compile, package and launch it:
$ javac
    -d target/classes
    ${source-files}

$ jar --create
    --file target/jpms-hello-world.jar
    --main-class org.codefx.demo.jpms.HelloModularWorld
    -C target/classes .

$ java
    --module-path target/jpms-hello-world.jar
    --module org.codefx.demo.jpms_hello_world
Very similar to what we would have done for a non-modular application, except we’re now using something called a "module path" and can define the project’s main class (without a manifest). Let’s see how that works.
Modules
Modules are like JARs with additional characteristics
The basic building blocks of the JPMS are modules (surprise!). Like JARs, they are a container for types and resources; but unlike JARs, they have additional characteristics – these are the most fundamental ones:
- a name, preferably one that is globally unique
- declarations of dependencies on other modules
- a clearly defined API that consists of exported packages
The JDK was split into about a hundred so-called platform modules. You can list them with java --list-modules and look at an individual module with java --describe-module ${module}. Go ahead, give it a try with java.sql or java.logging:
$ java --describe-module java.sql
> java.sql@9
> exports java.sql
> exports javax.sql
> exports javax.transaction.xa
> requires java.logging transitive
> requires java.base mandated
> requires java.xml transitive
> uses java.sql.Driver
A module’s properties are defined in a module declaration, a file module-info.java in the project’s root source directory, which looks as follows:
module ${module-name} {
    requires ${module-name};
    exports ${package-name};
}
It gets compiled into a module-info.class, called the module descriptor, which ends up in the JAR’s root. This descriptor is the only difference between a plain JAR and a modular JAR.
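By the way, if you ever want to check what ended up in a modular JAR, the jar tool can print the descriptor for you – a quick sketch (the exact output format may vary between JDK versions):

$ jar --describe-module
    --file target/jpms-hello-world.jar

It prints the module’s name and, among other things, its requires and exports directives as well as the main class if one was set.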
Let’s go through the three module properties one by one: name, dependencies, exports.
Name
The most basic property that JARs are missing is a name that the compiler and JVM can use to identify them. It is hence the most prominent characteristic of a module. We now have the possibility and even the obligation to give every module we create a name.
The best name for a module is the reverse-domain naming scheme that is already commonly used for packages
Naming a module will often be pretty natural as most tools we use on a daily basis, be it IDEs, build tools, or even issue trackers and version control systems, already have us name our projects. But while it makes sense to take that name as a springboard on the search for a module name, it is important to choose wisely!
The module system leans heavily on a module’s name. Conflicting or evolving names in particular cause trouble, so it is important that the name is:
- globally unique
- stable
The best way to achieve that is the reverse-domain naming scheme that is already commonly used for packages:
module org.codefx.demo.jpms {
}
Dependencies And Readability
All dependencies have to be made explicit with requires directives
Another thing we missed in JARs was the ability to declare dependencies, but with the module system, these times are over: Dependencies have to be made explicit – all of them, on JDK modules as well as on third-party libraries or frameworks.
Dependencies are declared with requires directives, which consist of the keyword itself followed by a module name. When scanning modules, the JPMS builds a readability graph, where modules are nodes and requires directives get turned into so-called readability edges – if module org.codefx.demo.jpms requires module java.base, then at runtime org.codefx.demo.jpms reads java.base.
The module system will throw an error if it cannot find a required module with the right name, which means compiling as well as launching an application will fail if modules are missing. This achieves reliable configuration, one of the goals of the module system, but can be prohibitively strict – check my post on optional dependencies for a more lenient alternative.
All types the Hello World example needs can be found in the JDK module java.base, the so-called base module. Because it contains essential types like Object, all Java code needs it and so it doesn’t have to be required explicitly. Still, I do it in this case to show you a requires directive:
module org.codefx.demo.jpms {
    requires java.base;
}
Exports And Accessibility
A module’s API is defined by its exports directives
A module lists the packages it exports. For code in one module (say org.codefx.demo.jpms) to access types in another (say String in java.base), the following accessibility rules must be fulfilled:
- the accessed type (String) must be public
- the package containing the type (java.lang) must be exported by its module (java.base)
- the accessing module (org.codefx.demo.jpms) must read the accessed one (java.base), which is typically achieved by requiring it
Reflection lost its superpowers
If any of these rules is violated at compile or run time, the module system throws an error. This means that public is no longer really public: a public type in a non-exported package is as inaccessible to the outside world as a non-public type in an exported package. Also note that reflection lost its superpowers. It is bound by the exact same accessibility rules unless command line flags are used.
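To make that concrete, here’s a hypothetical sketch: suppose the Hello World module contained a second package org.codefx.demo.jpms.internal (made up for this example) that it does not export. Code in another module that requires org.codefx.demo.jpms_hello_world could then use the exported package but not the internal one:

// in a module that requires org.codefx.demo.jpms_hello_world
import org.codefx.demo.jpms.HelloModularWorld;   // compiles – package is exported
import org.codefx.demo.jpms.internal.Helper;     // compile error – package is not exported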
Since our example has no meaningful API, no outside code needs to access it and so we don’t actually have to export anything. Once again I’ll do it nonetheless for demonstration purposes:
module org.codefx.demo.jpms_hello_world {
    requires java.base;
    exports org.codefx.demo.jpms;
}
Module Path
We now know how we can define modules and their essential properties. What’s still a little unclear is how exactly we tell the compiler and runtime about them. The answer is a new concept that parallels the class path:
The module path is a list whose elements are artifacts or directories that contain artifacts. Depending on the operating system, module path elements are separated by : (Unix-based) or ; (Windows). It is used by the module system to locate required modules that are not found among the platform modules. Both javac and java as well as other module-related commands can process it – the command line options are --module-path and -p.
All artifacts on the module path are turned into modules. This is even true for plain JARs, which get turned into automatic modules.
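As a quick illustration (the JAR is hypothetical; the naming follows the rules the JPMS applies to automatic modules): a plain commons-lang3-3.4.jar on the module path gets its version suffix stripped and the remaining hyphen replaced with a dot, so other modules can require it as commons.lang3:

module monitor {
    // automatic module derived from the plain JAR commons-lang3-3.4.jar
    requires commons.lang3;
}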
Compiling, Packaging, Running
Compiling works much like without the module system:
$ javac
    -d target/classes
    ${source-files}
(You of course have to replace ${source-files} with an actual enumeration of the involved files, but that crowds the examples, so I don’t do it here.)
The module system kicks in as soon as a module-info.java is among the source files. All non-JDK dependencies the module under compilation requires need to be on the module path. For the Hello World example, there are no such dependencies.
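So for this project, ${source-files} expands to just two files, and the full command could look like this (assuming the src/main/java layout shown earlier):

$ javac
    -d target/classes
    src/main/java/module-info.java
    src/main/java/org/codefx/demo/jpms/HelloModularWorld.java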
Packaging with jar is unchanged as well. The only difference is that we no longer need a manifest to declare an application’s entry point – we can use --main-class for that:
$ jar --create
    --file target/jpms-hello-world.jar
    --main-class org.codefx.demo.jpms.HelloModularWorld
    -C target/classes .
Finally, launching looks a little different. We use the module path instead of the class path to tell the JPMS where to find modules. All we need to do beyond that is to name the main module with --module:
$ java
    --module-path target/jpms-hello-world.jar
    --module org.codefx.demo.jpms_hello_world
And that’s it! We’ve created a very simple, but nonetheless modular Hello World application and successfully built and launched it. Now it’s time to turn to a slightly less trivial example to see mechanisms like dependencies and exports in action.
The ServiceMonitor
Let’s imagine a network of services that cooperate to delight our users; maybe a social network or a video platform. We want to monitor those services to determine how healthy the system is and spot problems when they occur (instead of when customers report them). This is where the example application, the ServiceMonitor, comes in: it monitors these services (another big surprise).
As luck would have it, the services already collect the data we want, so all the ServiceMonitor needs to do is query them periodically. Unfortunately not all services expose the same REST API – two generations are in use, Alpha and Beta. That’s why ServiceObserver is an interface with two implementations.
Once we have the diagnostic data, in the form of DiagnosticDataPoint instances, they can be fed to a Statistician, which aggregates them into Statistics. These, in turn, are stored in a StatisticsRepository as well as made available via REST by MonitorServer. The Monitor class ties everything together.
All in all, we end up with these types:
- DiagnosticDataPoint: service data for a time interval
- ServiceObserver: interface for service observation that returns DiagnosticDataPoint
- AlphaServiceObserver and BetaServiceObserver: each observes a variant of services
- Statistician: computes Statistics from DiagnosticDataPoint
- Statistics: holds the computed statistics
- StatisticsRepository: stores and retrieves Statistics
- MonitorServer: answers REST calls for the statistics
- Monitor: ties everything together
The application depends on the Spark micro web framework and we reference it by the module name spark.core. It can be found in the libs directory together with its transitive dependencies.
With what we learned so far, we already know how to organize the application as a single module. First, we create the module declaration module-info.java in the project’s root:
module monitor {
    requires spark.core;
}
Note that we should choose a module name like org.codefx.demo.monitor, but that would crowd the examples, so I’ll stick to the shorter monitor. As explained, it requires spark.core and because the application has no meaningful API, it exports no packages.
We can then compile, package, and run it as follows:
$ javac
    --module-path libs
    -d classes/monitor
    ${source-files}

$ jar --create
    --file mods/monitor.jar
    --main-class monitor.Main
    -C classes/monitor .

$ java
    --module-path mods
    --module monitor
As you can see, we no longer use Maven’s target directory and instead create classes in classes and modules in mods. This makes the examples easier to parse. Note that unlike earlier, we already have to use the module path during compilation because this application has non-JDK dependencies.
And with that we’ve created a single-module ServiceMonitor!
Splitting Into Modules
Now that we got one module going, it’s time to really start using the module system and split the ServiceMonitor up. For an application of this size it is of course ludicrous to turn it into several modules, but it’s a demo, so here we go.
The most common way to modularize applications is a separation by concerns. ServiceMonitor has the following, with the related types in parentheses:
- collecting data from services (ServiceObserver, DiagnosticDataPoint)
- aggregating data into statistics (Statistician, Statistics)
- persisting statistics (StatisticsRepository)
- exposing statistics via a REST API (MonitorServer)
But the domain logic is not the only thing that generates requirements. There are also technical ones:
- data collection must be hidden behind an API
- Alpha and Beta services each require a separate implementation of that API (AlphaServiceObserver and BetaServiceObserver)
- orchestration of all concerns (Monitor)
This results in the following modules with the mentioned publicly visible types:
- monitor.observer (ServiceObserver, DiagnosticDataPoint)
- monitor.observer.alpha (AlphaServiceObserver)
- monitor.observer.beta (BetaServiceObserver)
- monitor.statistics (Statistician, Statistics)
- monitor.persistence (StatisticsRepository)
- monitor.rest (MonitorServer)
- monitor (Monitor)
Superimposing these modules over the class diagram makes it easy to see the module dependencies emerge.
Reorganizing Source Code
A real-life project consists of myriad files of many different types. Obviously, source files are the most important ones but nonetheless only one kind of many – others are test sources, resources, build scripts or project descriptions, documentation, source control information, and many others. Any project has to choose a directory structure to organize those files and it is important to make sure it does not clash with the module system’s characteristics.
If you have been following the module system’s development under Project Jigsaw and studied the official quick start guide or some early tutorials, you might have noticed that they use a particular directory structure, where there’s a src directory with a subdirectory for each project. That way ServiceMonitor would look as follows:
ServiceMonitor
 + classes
 + mods
 - src
    + monitor
    - monitor.observer
       - monitor
          - observer
             DiagnosticDataPoint.java
             ServiceObserver.java
       module-info.java
    + monitor.observer.alpha
    + monitor.observer.beta
    + monitor.persistence
    + monitor.rest
    + monitor.statistics
 - test-src
    + monitor
    + monitor.observer
    + monitor.observer.alpha
    + monitor.observer.beta
    + monitor.persistence
    + monitor.rest
    + monitor.statistics
This results in a hierarchy concern/module and I don’t like it. Most projects that consist of several sub-projects (what we now call modules) prefer separate root directories, where each contains a single module’s sources, tests, resources, and everything else mentioned earlier. They use a hierarchy module/concern and this is what established project structures provide.
The default directory structure, implicitly understood by tools like Maven and Gradle, implements that hierarchy. First and foremost, it gives each module its own directory tree. In that tree the src directory contains production code and resources (in main/java and main/resources, respectively) as well as test code and resources (in test/java and test/resources, respectively):
ServiceMonitor
 + monitor
 - monitor.observer
    - src
       - main
          - java
             - monitor
                - observer
                   DiagnosticDataPoint.java
                   ServiceObserver.java
             module-info.java
          + resources
       + test
          + java
          + resources
    + target
 + monitor.observer.alpha
 + monitor.observer.beta
 + monitor.persistence
 + monitor.rest
 + monitor.statistics
I will organize the ServiceMonitor almost like that, with the only difference that I will create the bytecode in a directory classes and the JARs in a directory mods, which are both right below ServiceMonitor, because that makes the scripts shorter and more readable.
Let’s now see what those module declarations have to contain and how we can compile and run the application.
Declaring Modules
We’ve already covered how modules are declared using module-info.java, so there’s no need to go into details. Once you’ve figured out how modules need to depend on one another (your build tool should know that; otherwise ask JDeps), you can put in requires directives, and the necessary exports emerge naturally from imports across module boundaries.
module monitor.observer {
    exports monitor.observer;
}

module monitor.observer.alpha {
    requires monitor.observer;
    exports monitor.observer.alpha;
}

module monitor.observer.beta {
    requires monitor.observer;
    exports monitor.observer.beta;
}

module monitor.statistics {
    requires monitor.observer;
    exports monitor.statistics;
}

module monitor.persistence {
    requires monitor.statistics;
    exports monitor.persistence;
}

module monitor.rest {
    requires spark.core;
    requires monitor.statistics;
    exports monitor.rest;
}

module monitor {
    requires monitor.observer;
    requires monitor.observer.alpha;
    requires monitor.observer.beta;
    requires monitor.statistics;
    requires monitor.persistence;
    requires monitor.rest;
}
By the way, you can use JDeps to create an initial set of module declarations. Whether created automatically or manually, in a real-life project you should verify whether your dependencies and APIs are as you want them to be. It is likely that over time, some quick fixes introduced relationships that you’d rather get rid of. Do that now or create some backlog issues.
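A minimal sketch of such a JDeps run for one of the ServiceMonitor JARs (treat the exact invocation as an assumption and check jdeps --help for your JDK):

$ jdeps
    --module-path mods:libs
    --generate-module-info generated-declarations
    mods/monitor.rest.jar

JDeps then writes a module-info.java for the analyzed JAR into the given directory, which you can use as a starting point.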
Compiling, Packaging, And Running
Very similar to before when it was only a single module, but more often:
$ javac
    -d classes/monitor.observer
    ${source-files}
$ jar --create
    --file mods/monitor.observer.jar
    -C classes/monitor.observer .

# monitor.observer.alpha depends on monitor.observer,
# so we place 'mods', which contains monitor.observer.jar,
# on the module path
$ javac
    --module-path mods
    -d classes/monitor.observer.alpha
    ${source-files}
$ jar --create
    --file mods/monitor.observer.alpha.jar
    -C classes/monitor.observer.alpha .

# more of the same … until we come to monitor,
# which once again defines a main class
$ javac
    --module-path mods
    -d classes/monitor
    ${source-files}
$ jar --create
    --file mods/monitor.jar
    --main-class monitor.Main
    -C classes/monitor .
Congratulations, you’ve got the basics covered! You now know how to organize, declare, compile, package, and launch modules and understand what role the module path, the readability graph, and modular JARs play.
On The Horizon
If you weren’t so damn curious this post could be over now, but instead I’m going to show you a few of the more advanced features, so you know what to read about next.
Implied Readability
The ServiceMonitor module monitor.observer.alpha describes itself as follows:
module monitor.observer.alpha {
    requires monitor.observer;
    exports monitor.observer.alpha;
}
|
Instead it should actually do this:
module monitor.observer.alpha {
    requires transitive monitor.observer;
    exports monitor.observer.alpha;
}
|
Spot the transitive in there? It makes sure that any module reading monitor.observer.alpha also reads monitor.observer. Why would you do that? Here’s a method from alpha's public API:
public static Optional<ServiceObserver> createIfAlphaService(String service) {
    // …
}
|
It returns an Optional<ServiceObserver>, but ServiceObserver comes from the monitor.observer module – that means every module that wants to call alpha's createIfAlphaService needs to read monitor.observer as well or such code won’t compile. That’s pretty inconvenient, so modules like alpha that use another module’s type in their own public API should generally require that module with the transitive modifier.
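Here’s a hypothetical sketch of the consumer’s side (the consuming module and the URL are made up). Thanks to requires transitive, requiring alpha suffices to also read monitor.observer:

module monitor.consumer {
    // no explicit 'requires monitor.observer' needed –
    // monitor.observer.alpha implies it
    requires monitor.observer.alpha;
}

// compiles although ServiceObserver comes from monitor.observer
Optional<ServiceObserver> observer =
    AlphaServiceObserver.createIfAlphaService("http://alpha.example.com");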
There are more uses for implied readability.
Optional Dependencies
This is quite straightforward: if you want to compile against a module’s types but don’t want to force its presence at runtime, you can mark your dependency as optional with the static modifier:
module monitor {
    requires monitor.observer;
    requires static monitor.observer.alpha;
    requires static monitor.observer.beta;
    requires monitor.statistics;
    requires monitor.persistence;
    requires static monitor.rest;
}
In this case monitor seems to be ok with the alpha and beta observer implementations possibly being absent and it looks like the REST endpoint is optional, too.
There are a few things to consider when coding against optional dependencies.
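One of those things, as a minimal sketch (the ModuleLayer check is just one way to do this): code in monitor must only touch alpha’s types if the optional module actually made it into the module graph:

// in monitor's code – check presence before using the optional module
boolean alphaPresent = ModuleLayer.boot()
    .findModule("monitor.observer.alpha")
    .isPresent();
if (alphaPresent) {
    // only here is it safe to reference types from monitor.observer.alpha
}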
Qualified Exports
Regular exports have you make the decision whether a package’s public types are accessible only within the same module or to all modules. Sometimes you need something in between, though. If you’re shipping a bunch of modules, you might end up in the situation where you’d like to share code between those modules but not outside of them. Qualified exports to the rescue!
module monitor.util {
    exports monitor.util to monitor, monitor.statistics;
}
|
This way only monitor and monitor.statistics can access the monitor.util package.
Open Packages And Modules
I said earlier that reflection’s superpowers were revoked – it now has to play by the same rules as regular access. Reflection still has a special place in Java’s ecosystem, though, as it enables frameworks like Hibernate, Spring and so many others.
The bridge between those two poles is formed by open packages and modules:
module monitor.persistence {
    opens monitor.persistence.dtos;
}

// or even
open module monitor.persistence { }
An open package is inaccessible at compile time (so you can’t write code against its types), but accessible at run time (so reflection works). More than just being accessible, it allows reflective access to non-public types and members (this is called deep reflection). Open packages can be qualified just like exports, and open modules simply open all their packages.
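As a hypothetical sketch (the DTO class and field are made up), this is what a framework could do with the opened package monitor.persistence.dtos:

// deep reflection into an open package – exception handling omitted for brevity
Class<?> dto = Class.forName("monitor.persistence.dtos.StatisticsDto");
Field average = dto.getDeclaredField("average");
// succeeds because the package is open; for a merely exported
// package this would throw InaccessibleObjectException
average.setAccessible(true);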
Services
Instead of having the main module monitor depend on monitor.observer.alpha and monitor.observer.beta, so it can create instances of AlphaServiceObserver and BetaServiceObserver, it could let the module system make that connection:
module monitor {
    requires monitor.observer;
    // monitor wants to use a service
    uses monitor.observer.ServiceObserverFactory;
    requires monitor.statistics;
    requires monitor.persistence;
    requires monitor.rest;
}

module monitor.observer.alpha {
    requires monitor.observer;
    // alpha provides a service implementation
    provides monitor.observer.ServiceObserverFactory
        with monitor.observer.alpha.AlphaServiceObserverFactory;
}

module monitor.observer.beta {
    requires monitor.observer;
    // beta provides a service implementation
    provides monitor.observer.ServiceObserverFactory
        with monitor.observer.beta.BetaServiceObserverFactory;
}
This way, monitor can do the following to get an instance of each provided observer factory:
List<ServiceObserverFactory> observerFactories = ServiceLoader
    .load(ServiceObserverFactory.class)
    .stream()
    .map(Provider::get)
    .collect(toList());
It uses the ServiceLoader API, which has existed since Java 6, to inform the module system that it needs all implementations of ServiceObserverFactory. The JPMS will then track down all modules in the readability graph that provide that service, create an instance of each, and return them.
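For reference, here’s a self-contained sketch of the consuming side with all imports spelled out (the class name is made up; the factory interface is the one declared above):

import java.util.List;
import java.util.ServiceLoader;
import java.util.ServiceLoader.Provider;
import java.util.stream.Collectors;

import monitor.observer.ServiceObserverFactory;

public class ObserverFactoryLoader {

    // returns one factory instance per provider module in the module graph
    public static List<ServiceObserverFactory> loadFactories() {
        return ServiceLoader
            .load(ServiceObserverFactory.class)
            .stream()
            .map(Provider::get)
            .collect(Collectors.toList());
    }
}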
There are two particularly interesting consequences:
- the module consuming the service does not have to require the modules providing it
- the application can be configured by selecting which modules are placed on the module path
Services are a wonderful way to decouple modules, and it’s awesome that the module system gives this mostly ignored concept a second life and puts it into a prominent place.
Reflection
Ok, we’re really done now and you’ve learned a lot. Quick recap:
- a module is a run-time concept created from a modular JAR
- a modular JAR is like any old plain JAR, except that it contains a module descriptor module-info.class, which is compiled from a module declaration module-info.java
- the module declaration gives a module its name, defines its dependencies (with requires, requires static, and requires transitive) and API (with exports and exports to), enables reflective access (with open and opens to) and declares use or provision of services
- modules are placed on the module path where the JPMS finds them during module resolution, which is the phase that processes descriptors and results in a readability graph
If you want to learn more about the module system, read the posts I linked above, check the JPMS tag, or get my book The Java Module System (Manning). Also, be aware that migrating to Java 9 can be challenging – check my migration guide for details.
Increasing Your Site’s Visibility
Are you wondering how to increase your site’s visibility on search engines but don’t know how, or are you looking for new and effective strategies? All you have to do is dedicate five minutes of your time to reading this page.
What follows are some effective strategies that, applied together, will let you increase your online positioning significantly.
Keep in mind, however, that increasing a site’s visibility is possible, but it depends on various factors that change over time. Defining and implementing the right mix of strategies is therefore the result of constant practice and experience.
Increasing Site Visibility: SEO/SEM
Increasing a site’s visibility is intimately tied to two disciplines: search engine optimization (SEO) and search engine marketing (SEM). Effectively applying SEO/SEM techniques and methodologies makes it possible to check a site’s ranking, increase its traffic, and increase visits to the website.
Adopting SEO strategies makes it possible to improve ranking on search engines and thus increase a site’s visibility on Google for free, while with SEM the immediate gain in visibility comes from buying paid advertising or joining PPC (pay-per-click) campaigns. SEM is therefore suited to short-term strategies, SEO to medium-to-long-term ones.
SEO and SEM are based entirely on individual keywords or groups of words (long tail) entered into search engines. Researching and understanding which keywords users type to reach a site (keyword research) is fundamental for implementing SEO strategies or SEM campaigns aimed at increasing the site’s visibility.
SEO Strategies
Google is the main search engine, and its algorithms evaluate roughly 200 ranking factors to establish a hierarchy among pages for a given search. The algorithms are constantly updated, however (Panda, Penguin, Pigeon, Hummingbird, Mobilegeddon, Fred, RankBrain). Leaving aside black-hat SEO activities (buying links, link stuffing, etc.) as a way to increase site visits, here are some on-page and off-page SEO factors useful for ranking on search engines:
- Title tag: Google gives a lot of weight to the words at the beginning of the title tag (H1), so strategically include your keyword or focus keyword at the beginning of your page/article title, as well as in the H2 and H3 subheadings.
- SEO-friendly URLs: use short, easy-to-remember URLs that include your focus keyword (e.g. https://www.miosito.it/aumentare visibilità sito)
- Use of multimedia: use different multimedia formats (photos, videos, infographics, etc.) in the posts you publish, while taking care not to "weigh down" the page/site
- External links: include at least two external links to sites whose authority is recognized (institutional sites, popular blogs, news or governmental cultural sites, etc.)
- Website speed: your page or site’s loading speed is not a crucial ranking factor, but it does carry some weight. You will therefore also need to optimize images, CSS, JavaScript, Flash, etc. There are various online tools to monitor speed, among which we recommend GTmetrix, Google PageSpeed Insights, and YSlow
- Social buttons: make sure your site has share buttons for the main social media platforms
- Internal links: remember to add two or three internal links to previously published articles/pages in each new post you publish
- Image optimization: putting the keyword in an image’s alt text will let you increase your site’s visibility and traffic from Google’s image search
- Link building or link earning: this strategy consists of generating backlinks, which are simply quality links to your site coming from other websites. Google treats them as an indicator of the quality of your content. Receiving many backlinks therefore confers "authority" and "popularity" on your site, helping its indexing. There are essentially two ways to generate backlinks:
– running an effective digital PR campaign to get other blog/website owners and social media influencers to create links to your site’s content
– creating relevant content that generates curiosity and interest and encourages the natural creation of links to your site (link baiting)
As you can see, improving a website through SEO is a broad and constantly evolving topic; for more depth, we therefore recommend reading our article on the subject.
SEM and Display Advertising Strategies
SEM is the process of gaining search engine traffic and visibility through paid actions, such as buying links (not well regarded by Google), buying advertising space on relevant sites, or joining pay-per-click campaigns such as Google AdWords. For search ads, Google makes available the first four and the last three positions on each SERP page. In this way Google lets advertisers increase a site’s visibility immediately, but not for free.
AdWords is based on an auction system: the advertiser bids on the keyword or keyphrase they want their ad to appear for. Based on the bids received for a keyword, each ad is assigned a quality score. AdWords runs the auction in real time.
The payment method for the advertiser varies with the type of ad: the most common is cost per click (CPC), where the advertiser pays every time their ad receives a click.
To learn more, download the SEM mini-guide!
Improving a Website: Create and/or Submit a Sitemap
A sitemap is an ordered list of all the pages of a site that you want a search engine to index. To increase a site’s visibility, it is definitely a good idea to help search engine bots/spiders/crawlers find and understand what all the pages of your website are about. Submitting a sitemap is particularly important if:
- your site has dynamic content
- your site has pages that are not easy for bots to find (for example pages with rich AJAX or images)
- your site contains pages with links that do not link correctly to another page.
You can create your site’s sitemap manually, via RSS feed, or with tools such as Google Search Console (see how to create and submit a sitemap with Search Console).
Increasing Site Visibility for Free: Create Quality Content That Is Unique and Easy to Share
Writing quality content that is unique and relevant to a business is (as we saw earlier) very important, both for earning backlinks and for building your site’s reputation, authority, and popularity. Creating content that is easy to share on social platforms through social buttons does not directly increase a site’s visibility.
It does so indirectly, however, because sharing content on social media increases the chances of receiving backlinks, which we know is a factor that favors a website’s ranking.
To be effective, writing for the web must follow precise rules regarding form, style, and content, which come from SEO copywriting. These should be paired with persuasion techniques applied to marketing, which come from a new field of study: neuromarketing.
Online Positioning: Identify and Eliminate Problems and Errors
A website, especially a large one, can suffer from errors and small problems of various kinds. Accumulating over time, these can start to cause problems serious enough to reduce rather than increase a site’s visibility. Tools such as SEMrush or Raven Tools make it possible to remedy this condition, which is usually traceable to:
- duplicate or missing title tags
- broken links
- images without alternative text (alt text)
- pages blocked by robots.txt
- 302 redirects that should be 301 redirects (learn how to set up a redirect)
Find out how to become an SEO specialist
Improving Google Search: Set Up Web Analytics Tools
Another effective strategy to adopt for increasing a site’s visibility is the adoption of web analytics technologies and methodologies. These make it possible to monitor all the SEO/SEM and web marketing activities you have implemented, collecting an enormous amount of data from which valuable information can be extracted on how to optimize the performance of the website, app, or social platform you are managing. Most web analytics processes unfold in four essential phases:
- collecting online data
- analyzing the collected information through appropriate metrics
- identifying the KPIs
- creating an online strategy
The most widely used web analytics tool is Google Analytics. It is a free platform (although a paid version exists) whose installation consists of a small snippet of JavaScript code that must be placed inside your website. Other well-known tools are Alexa and the various Twitter analytics and Facebook insights offerings.
If you enjoy digital marketing and would like to turn your passion into a job, I recommend taking a look at the guide on how to find companies hiring in digital marketing.
Increasing Site Visibility: Track Performance Over Time
Another strategy to pursue in order to increase a site’s visibility is to connect it to Google Search Console. This is a free service offered by Google that, besides creating and submitting a site’s sitemap, lets you:
- check important backlinks to your site
- verify that Google is not experiencing indexing problems with your site
- discover which search queries bring traffic to your site
- track the website’s search engine ranking over time, to see whether further adjustments are needed to improve its online positioning.
Besides Search Console, you can also draw on numerous other tools and software packages to increase site visits, among them Trafficwave, Auto Traffic Generator, and Traffic Programmer.
Free Visibility on Google: Register a Business with Google My Business
If your website serves a business, one way to increase visits to it is to register your business on Google My Business. The successor to Google Places, registration with this platform increases a site’s visibility in localized search results. Registration is followed by a letter containing a PIN being mailed to your place of business.
This mailing allows Google to verify your business’s geographic location. Once verified, you have a good chance of appearing in the search results (and on Google Maps) of people who are looking for a business like yours in the area where you operate.
Conclusions
Increasing a site’s visibility is a complex activity. If your business needs it but you don’t want a professional to assist you in the process, consider a much more effective and less costly solution: take part yourself in one of the classroom or online courses and masters offered by Digital Coach.
How to create and manage services in CentOS 7 with systemd
Directory | Description
---|---
/usr/lib/systemd/system/ | Unit files distributed with installed packages. Do not modify unit files in this location.
/run/systemd/system/ | Unit files that are dynamically created at runtime. Changes in this directory are lost on reboot.
/etc/systemd/system/ | Unit files created by systemctl enable and custom unit files created by system administrators.
Any custom unit files that you create should be placed in the /etc/systemd/system/ directory. This directory takes precedence over the other directories.
Unit file names are of the form unit_name.unit_type, where unit_type can be one of the following:
Unit Type | Description
---|---
device | A device unit.
service | A system service.
socket | A socket for inter-process communication.
swap | A swap file or device.
target | A group of units.
timer | A systemd timer.
snapshot | A snapshot of the systemd manager.
mount | A mount point.
slice | A group of units that manage system processes.
path | A file or directory.
automount | An automount point.
scope | An externally created process.
Creating a new service (systemd unit)
To create a custom service to be managed by systemd, you create a unit file that defines the configuration of that service. To create a service named MyService, for example, you create a file named MyService.service in /etc/systemd/system/:

# vim /etc/systemd/system/MyService.service
The unit file of a service consists of a set of directives that are organized into three sections: [Unit], [Service], and [Install]. Below is an example of a very simple unit file.
[Unit]
Description=Service description

[Service]
ExecStart=path_to_executable

[Install]
WantedBy=default.target
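As a concrete illustration, here’s what a unit file for a hypothetical backup script could look like (the description and script path are made up for this example):

[Unit]
Description=Nightly backup service
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/backup.sh

[Install]
WantedBy=multi-user.target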
Once you have created the unit file with all the necessary configuration options, save the file and set the correct file permissions.
# chmod 664 /etc/systemd/system/MyService.service
The next step is to reload all unit files to make systemd know about the new service.
# systemctl daemon-reload
Finally, to start the service, run

# systemctl start MyService.service

To enable the service to start at boot, run

# systemctl enable MyService.service

Reboot the host with systemctl reboot to verify that the service starts as expected during system boot.
[Unit] Section
The following are the main directives that you specify in the [Unit] section.
Directive | Description
---|---
Description | A short description of the unit.
Documentation | A list of URIs pointing to the documentation for the unit.
Requires | A list of units that must be started alongside the current unit. If any of these units fail to start, the current unit will not be activated.
Wants | Similar to the Requires directive, but the current unit will be activated even if the depended-on units fail to start.
Before | The units listed here cannot be started until the current unit has started.
After | The current unit can be started only after the units listed here.
Conflicts | A list of units that cannot run concurrently with the current unit.
[Service] Section
Some of the common directives that you’ll see in the [Service] section are:

Directive | Description
---|---
Type | Defines the startup type of the unit, which can be one of: simple, forking, oneshot, dbus, notify, or idle.
ExecStart | Specifies the command to be executed to start the service.
ExecStartPre | Specifies the command to be executed before the main process specified in ExecStart is started.
ExecStartPost | Specifies the command to be executed after the main process specified in ExecStart has started.
ExecStop | Specifies the command to be executed when the service is stopped.
ExecReload | Specifies the command to be executed when the service is reloaded.
Restart | Specifies when to restart the service automatically. Possible values are "always", "on-success", "on-failure", "on-abnormal", "on-abort", and "on-watchdog".
[Install] Section
The [Install] section provides the information required to enable or disable the unit using the systemctl command. The common options are:

Directive | Description
---|---
RequiredBy | A list of units that require this unit. A symbolic link to this unit is created in the .requires directory of each listed unit.
WantedBy | A list of targets under which the service should be started. A symbolic link to this unit is created in the .wants directory of each listed target.
Using systemctl to manage services
systemctl is the command line tool you can use to control and manage services in systemd. Let’s now take a look at some of the important systemctl commands for service management.
Listing Service Units and Unit files
To list all the units that are loaded
# systemctl list-units
To list only units of type service
# systemctl list-units -t service
To list all installed unit files of type service
# systemctl list-unit-files -t service
You can use the --state option to filter the output by the state of the unit. The following command lists all services that are enabled.
# systemctl list-unit-files --state enabled
Note that the difference between list-units and list-unit-files is that list-units only shows units that are currently loaded, while list-unit-files shows all unit files that are installed on the system.
Start and Stop service
This is quite straightforward: use the start option to start a service and the stop option to stop it.
# systemctl start service_name.service
# systemctl stop service_name.service
Restart and Reload services
The restart option will restart a service that is running. If the service is not running, it will be started.
# systemctl restart service_name.service
If you want to restart the service only if it is running, use the try-restart option.
# systemctl try-restart service_name.service
The reload option will try to reload the service specific configuration of a unit if it is supported.
# systemctl reload service_name.service
Enable and Disable services
Units can be enabled or disabled using the enable or disable options of the systemctl command. When a unit is enabled, symbolic links are created in various locations, as specified in the [Install] section of the unit file. Disabling a unit removes the symbolic links that were created when the unit was enabled.
# systemctl enable service_name.service
# systemctl disable service_name.service
Reload Unit Files
Whenever you make any changes to the unit files, you need to let systemd know by executing daemon-reload, which reloads all unit files.
# systemctl daemon-reload
Modifying system services
The unit files that come with installed packages are stored in /usr/lib/systemd/system/. The unit files in this directory should not be modified directly, as the changes would be lost when you update the package. The recommended method is to first copy the unit file to /etc/systemd/system/ and make the changes in that location. The unit files in /etc/systemd/system/ take precedence over those in /usr/lib/systemd/system/, so the original unit file will be overridden.
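As a sketch of that workflow, using httpd.service purely as an example:

# cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service
# vim /etc/systemd/system/httpd.service
# systemctl daemon-reload
# systemctl restart httpd.service

Edit the copy in /etc/systemd/system/, reload the unit files, and restart the service to pick up the changes.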