As developers become more concerned with injecting security-awareness into their processes, tools like Sonarqube become more popular. As that popularity grows, so does the desire to run the software on more than just the Linux distro(s) it was originally built for.
Most of my customers only allow the use of Red Hat (and, increasingly, CentOS) on their production networks. As these customers hire developers who cut their teeth on distros other than Red Hat, the pressure to deploy those developers' preferred tools onto Enterprise Linux mounts.
Sonarqube has two relevant installation-methods listed in its installation documentation. One is to explode a ZIP archive, install the files wherever desired, and set up the requisite automated startup as appropriate. The other is to install a "semi-official" RPM-packaging. Initially, I chose the former, but subsequently changed to the latter. While the former is fine if you're installing infrequently (and by hand), it falls down if you're doing automated deployments by way of service-frameworks like Amazon's CloudFormation (CFn) and/or AutoScaling.
My preferred installation method is to use CFn to create AutoScaling Groups (ASGs). I don't run Sonarqube clustered: the Community Edition that my developers' lack of funding limits me to doesn't support clustering. Using ASGs means that I can increase the service's availability by having Sonarqube simply rebuild itself if it ever unexpectedly goes offline. Rebuilds take only a couple of minutes and generally don't require any administrator intervention.
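A heavily-trimmed sketch of the relevant CFn resource (the resource, launch-config, and subnet names here are illustrative placeholders, not the ones from my actual templates) looks something like:

SonarqubeAsg:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    # A one-node "cluster": the ASG exists solely to replace the
    # instance if it is terminated or fails its health checks
    MinSize: '1'
    MaxSize: '1'
    DesiredCapacity: '1'
    HealthCheckType: EC2
    LaunchConfigurationName: !Ref SonarqubeLaunchConfig
    VPCZoneIdentifier:
      - !Ref SonarqubeSubnet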
This method was compatible with the ZIP-based installation method for the first month or so of running the service. Then, one night, it stopped working. When I investigated to figure out why, I found that the URL the deployment-automation was pointing to no longer existed. I'm not really sure what happened, since the Sonarqube maintainers seem to keep back-versions of the software around indefinitely.
/shrug
At any rate, it caused me to return to the installation documentation. This time, I noticed that there was an RPM option, so I went down that path. For the most part, it made things dead easy compared to the ZIP packaging. There were only a few gotchas with the RPM:
- The current packaging, in addition to being a "noarch" packaging, is neither EL version-specific nor cross-version. Effectively, it's EL6-oriented: it includes a legacy init script but no systemd service definition. The RPM maintainer likely needs either to produce separate EL6 and EL7 packagings or to update the packaging to include files for both init-types and use a platform-selector to install and activate the appropriate one.
- The current packaging creates the service account 'sonar' and installs the packaged files as user:group 'sonar' ...but doesn't include start-logic to ensure that Sonarqube actually runs as 'sonar'. This will manifest in Sonarqube's es.log file similarly to:
[2018-02-21T15:45:51,714][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:96) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.cli.Command.main(Command.java:62) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) ~[elasticsearch-5.1.1.jar:5.1.1]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:100) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:176) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:306) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-5.1.1.jar:5.1.1]
... 6 more
- The embedded Elasticsearch service requires that the file-descriptors ulimit be set to at least 65536: if the deployment is on a hardened system, it is reasonably likely the default ulimits are too low. If they are, this will manifest in Sonarqube's es.log file similarly to:
[2018-02-21T15:47:23,787][ERROR][bootstrap ] Exception
java.lang.RuntimeException: max file descriptors [65535] for elasticsearch process likely too low, increase to at least [65536]
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:79)
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:60)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:188)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:264)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:111)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:106)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:88)
at org.elasticsearch.cli.Command.main(Command.java:53)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:74)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)
- The embedded Elasticsearch service requires that the kernel's vm.max_map_count runtime parameter be raised. If it isn't, this will manifest in Sonarqube's es.log file similarly to:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
- The embedded Elasticsearch service's JVM won't start if the /tmp directory has had the noexec mount-option applied as part of the host's hardening. This will manifest in Sonarqube's es.log file similarly to:
WARN es[][o.e.b.Natives] cannot check if running as root because JNA is not available
WARN es[][o.e.b.Natives] cannot install system call filter because JNA is not available
WARN es[][o.e.b.Natives] cannot register console handler because JNA is not available
WARN es[][o.e.b.Natives] cannot getrlimit RLIMIT_NPROC because JNA is not available
WARN es[][o.e.b.Natives] cannot getrlimit RLIMIT_AS because JNA is not available
WARN es[][o.e.b.Natives] cannot getrlimit RLIMIT_FSIZE because JNA is not available
- External users won't be able to communicate with the service if the host-based firewall hasn't been appropriately configured.
The first three items are all addressable via a properly-written systemd service definition. The following is what I put together for my deployments:
[Unit]
Description=SonarQube
After=network.target network-online.target
Wants=network-online.target

[Service]
ExecStart=/opt/sonar/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonar/bin/linux-x86-64/sonar.sh stop
ExecReload=/opt/sonar/bin/linux-x86-64/sonar.sh restart
Group=sonar
LimitNOFILE=65536
PIDFile=/opt/sonar/bin/linux-x86-64/SonarQube.pid
Restart=always
RestartSec=30
User=sonar
Type=forking

[Install]
WantedBy=multi-user.target
I put this into a GitHub project I set up as part of automating my deployments for AWS. The 'User=sonar' option causes systemd to start the process as the sonar user, and the 'Group=sonar' directive ensures that the process runs under the sonar group. The 'LimitNOFILE=65536' setting ensures that the service's file-descriptor ulimit is set sufficiently high.
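With the unit file saved as /etc/systemd/system/sonarqube.service (a reasonable location; adjust to taste), activating it is just a matter of:

# Make systemd aware of the newly-added unit file
systemctl daemon-reload

# Configure the service to start at boot, then start it now
systemctl enable sonarqube
systemctl start sonarqube

# Confirm the processes are running as the 'sonar' user
systemctl status sonarqube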
The fourth item is addressed by creating an /etc/sysctl.d/sonarqube.conf file containing:
vm.max_map_count = 262144
I take care of this with my automation, but the RPM maintainer could obviate the need to do so by including an /etc/sysctl.d/sonarqube.conf file as part of the RPM. (Note that, on EL7, files in /etc/sysctl.d need the '.conf' suffix to be read at boot.)
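Creating the file only covers the next boot; to make the setting live immediately, load it by hand:

# Load settings from all sysctl configuration files, including the new one
sysctl --system

# Verify the running value
sysctl vm.max_map_count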
The fifth item is addressable through the
sonar.properties file. Adding a line like:
sonar.search.javaAdditionalOpts=-Djava.io.tmpdir=/var/tmp/elasticsearch
will cause Sonarqube to point Elasticsearch's temp-directory to
/var/tmp/elasticsearch. Note that this is an example directory: pick a location in which the sonar user can create a directory and that is on a filesystem that does not have the
noexec mount-option set. Without making this change, Elasticsearch will fail to start because JNA cannot be loaded.
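The chosen directory also has to exist, and be writable by the sonar user, before the service starts. Assuming the /var/tmp/elasticsearch example above, something like the following (run via automation or a %post script) suffices:

# Create the alternate temp-directory, owned by the 'sonar' service account
install -d -m 0750 -o sonar -g sonar /var/tmp/elasticsearch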
The final item is addressable by adding a port-exception to the firewalld service. This can be done as simply as executing `firewall-cmd --permanent --new-service=sonarqube` followed by `firewall-cmd --permanent --service=sonarqube --add-port=9000/tcp`, or by adding a /etc/firewalld/services/sonarqube.xml file containing:
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>Sonarqube service ports</short>
  <description>Firewalld options supporting Sonarqube deployments</description>
  <port protocol="tcp" port="9000" />
  <port protocol="tcp" port="9001" />
</service>
Either is suitable for inclusion in an RPM: the former by way of a
%post script; the latter as a config-file packaged in the RPM.
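In either case, the service definition by itself doesn't open any ports: firewalld still has to be told to permit the new service in the running zone. A minimal sketch of that last step:

# Re-read the permanent configuration so firewalld sees the new service
firewall-cmd --reload

# Permit the sonarqube service in the default zone, then make it live
firewall-cmd --permanent --add-service=sonarqube
firewall-cmd --reload

# Verify that the service is now allowed
firewall-cmd --list-services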
Once all of the bulleted items are accounted for, the RPM-installed service should start right up and function correctly on a hardened EL7 system. Surprisingly enough, no other tweaks are typically needed: Sonarqube will generally function just fine with both FIPS-mode and SELinux active.
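If you want to verify that a given host really is in that hardened state before blaming Sonarqube for a failure, the checks are quick:

# Report the SELinux mode ('Enforcing' on a hardened host)
getenforce

# Report FIPS-mode status ('1' means FIPS mode is enabled)
cat /proc/sys/crypto/fips_enabled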
It's likely that, once Sonarqube is deployed, functionality beyond the defaults will be desired. There are quite a number of functionality-extending plugins available. Download the ones you want, install them to <SONAR_INSTALL_ROOT>/extensions/plugins, and restart the service. Most plugins' behavior is configurable via the Sonarqube administration GUI.
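As a sketch of that process, assuming the RPM's /opt/sonar install-root and a hypothetical, already-downloaded example-plugin.jar:

# Copy the plugin where Sonarqube looks for extensions, readable by 'sonar'
install -m 0644 -o sonar -g sonar example-plugin.jar /opt/sonar/extensions/plugins/

# Restart so the new plugin gets loaded
systemctl restart sonarqube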
Note: while many of the 'fixes' following the gotcha list are scattered throughout the Sonarqube documentation, on an unhardened EL7 system the defaults are loose enough that accounting for them was not necessary for either the ZIP- or RPM-based installation-methods.
Extra: If you want the above taken care of for you and you're trying to deploy Sonarqube onto AWS, I put together a set of CloudFormation templates and helper scripts. They can be found on GitHub.