Today I gave a talk titled Fifty shades of Serverless at JavaForum Göteborg (Gothenburg), covering Serverless and the different options the market is offering us as developers in general and Java developers in particular. I also demoed a few examples and highlighted a few caveats to consider before going serverless.
Keeping your configuration separate from your code, rather than hard coding stuff, is sound advice. But in order to follow it, you need to understand what configuration is. The twelve-factor "manifesto" explains
"An app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc)"
and then goes on to list examples such as connection strings and credentials for databases and other external resources/services, and per-deploy things like the hostname to be included in aggregated logs.
What is not configuration?
The document continues by clarifying
"Note that this definition of “config” does not include internal application config, … This type of config does not vary between deploys"
In essence: things that vary between deployments (dev/qa/stage/production, geographic region etc) and/or instances (hostname within a deployment cluster) are configuration that should be kept separate from the version controlled code, while internal configuration identical across all deployments and instances belongs inside the codebase.
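The twelve-factor recommendation above can be sketched in a few lines of Java – reading a deploy-specific value from the environment, with a fallback for a developer machine. The variable name and fallback URL are made-up examples, not anything the manifesto prescribes:

```java
import java.util.Optional;

public class DatabaseConfig {

    // Deploy-specific values come from the environment, per twelve-factor.
    // "DATABASE_URL" and the dev fallback are illustrative assumptions.
    static String databaseUrl() {
        return Optional.ofNullable(System.getenv("DATABASE_URL"))
                .orElse("jdbc:h2:mem:dev"); // fallback for a local dev machine
    }
}
```

Each environment (staging, production, a developer laptop) then sets its own `DATABASE_URL`, and nothing deploy-specific lives in the codebase.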
In this post I’m going to argue that something else isn’t configuration from a Twelve-Factor App point of view:
Things that are expected to change over time are not configuration
Well, of course, if those things also differ per deployment, they are configuration. But just because something is determined to be more or less likely to change down the road, often for non-technical reasons, doesn’t make it configuration and warrant it to be treated as such.
Even if it is expected to change first in your development environment, then in your staging environment and finally in your production environment, it isn’t configuration. You know what else has the same lifecycle expectation? Your code.
Let me give you an example: the monthly price of a Netflix subscription. (Please note that this is completely fictitious – I don’t know how Netflix technically treats their prices.)
When Netflix launched, they likely anticipated that their prices would change at some point in the future. And when they actually did update their prices, don’t you think that they first did this in some dev/test/qa environment before they "released" the new price to the market (i.e. production environment)?
"I need a UI"
Often these kinds of things are initiated by some business stakeholders, such as the marketing people. "You need to create a UI so that we can change X [monthly price] whenever we want to". Sometimes they also want the ability to copy settings from one environment to another, either by some export/import settings feature in said UI, or by having someone set up a routine so that they can copy database entries from one environment to the other.
This should raise a red flag. If the business people want to be able to first make a "configuration" change in a qa/stage environment and then, when they think they are ready, copy that "configuration" over to the production environment, you should take a step back and contemplate what you are trying to achieve. Are you in essence creating two parallel deployment pipelines – one for the code and one for the config? Who is going to develop and maintain the config pipeline? Can the extra cost and complexity of two separate deployment pipelines really be justified…?
I would say a litmus test for whether something is truly business-only configuration warranting a UI is this: imagine that you stopped all development, propagated the code so that all environments ran the exact same codebase, and let all the “marketing deadlines” (such as price increases) pass. If there were then still a difference in configuration between the environments – would that be considered an error? Another way to put it: if you changed this configuration in the production environment first, would there be a need to propagate the change "backwards" to stage/qa/dev? Or is it totally fine if the Netflix subscription costs $10.99 for actual customers, $9.99 in the stage environment and $6.03 on John Doe’s development machine?
Putting "configuration" that is expected to change inside your codebase instead assumes that you release that codebase often enough compared to how frequently (and with how much notice) the configuration is expected to change. If your next production release is scheduled in 5 months, you’ll wish you had created that UI when the business people require that you change X at the next turn of the month.
But what if you took the time that you would have spent creating the UI and export/import feature, and instead spent that time streamlining the release pipeline of your codebase? Ultimately you’d have Continuous Delivery, so that a change made in your codebase (on a hotfix branch, if needed) could reach production within hours if not minutes.
If you release relatively often, but marketing requires that a change (such as price increase) occurs at a specific day or even time on that day, and you can’t or don’t want to release on that exact day or time, you could consider including the config in your codebase using Feature toggles. Admittedly there is overhead involved with allowing you to toggle the feature during runtime, however.
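As a sketch of the toggle approach, both the old and the new price can live in the codebase, with a cutover date acting as a simple compile-time toggle. All dates and amounts below are invented for illustration:

```java
import java.time.LocalDate;

public class SubscriptionPrice {

    // Both prices live in the codebase; the cutover date acts as the
    // feature toggle. All values here are made up for illustration.
    static final LocalDate NEW_PRICE_EFFECTIVE = LocalDate.of(2024, 7, 1);

    static double monthlyPrice(LocalDate today) {
        return today.isBefore(NEW_PRICE_EFFECTIVE) ? 9.99 : 10.99;
    }
}
```

The price change can be released well in advance, and flips automatically at the marketing deadline without a release on that exact day. (A runtime toggle service would let you flip it without relying on server clocks, at the cost of the overhead mentioned above.)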
Config or data?
Sometimes the thing you want to change or add is not just a simple string or number, but an entire data structure. This still doesn’t warrant a UI and/or export/import. For database data you could use – and hopefully already are using – a database migration tool like Flyway or Liquibase, and include the config change in those scripts.
Other options include storing structured data in separate files, such as XML or JSON, alongside your source code. Maybe you can even define a file format that the business people can manage themselves, and that will then be included in the codebase? (Before you suggest that however, I should warn you they are likely to suggest Excel…)
"But that requires a developer to make a business change!"
Generally that is true – as with changes to the business logic of your application. Even though you may be able to find ways around that, as per above, this means putting this kind of config in your codebase probably won’t work well for slow moving, waterfall type of organizations. But in an agile environment with Continuous Delivery or at least a high release cadence, it shouldn’t be much of an issue unless the changes required are very frequent or complex.
And remember, we had already saved ourselves development time by avoiding having to create the UI and/or the process for separately propagating configuration from one environment to another.
We’ve already mentioned the benefit of avoiding having to set up, document and maintain a separate "release pipeline" for your config. Consider that it would often need to be managed by non-tech people, and the benefits of avoiding it may be even greater.
Another main benefit of integrating this type of configuration in your codebase is that you can use lower level, cheaper/faster tests to verify the correctness of the settings. Hopefully you are familiar with the Test Pyramid, visualising that higher level tests (such as UI or integration tests) are both more expensive to maintain and run slower, effectively slowing down your release pipeline and decreasing your maximum possible release cadence.
In the Netflix example, having the price inside your codebase means you can write unit tests for verifying debit calculations etc, rather than having to write for example UI tests for the same verification.
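As a minimal sketch of that idea – with invented numbers, and doubles instead of a proper money type – here is a pro-rating calculation that a plain unit test can verify once the price lives in the codebase:

```java
public class DebitCalculator {

    // The price is part of the codebase, so it is covered by unit tests.
    // Value and rounding strategy are illustrative; real billing code
    // should use BigDecimal rather than double.
    static final double MONTHLY_PRICE = 10.99;

    // Hypothetical pro-rated charge for a partial month, rounded to cents
    static double charge(int daysUsed, int daysInMonth) {
        return Math.round(MONTHLY_PRICE * daysUsed / daysInMonth * 100) / 100.0;
    }
}
```

A one-line assertion on `charge(...)` replaces a slow UI test that would otherwise have to drive a browser through a signup flow just to check a number.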
You will also get your configuration version controlled, which is a positive side effect – especially if you managed to avoid Excel. 🙂
Agree or disagree? I’d love to hear your thoughts in the comments below.
Sometimes you need/want to make more radical changes to your SQL database schema, such as renaming a column or moving data from one table to another.
Tools like Flyway and Liquibase have simplified making backwards compatible database migrations, such as adding columns/tables. However, making non-backwards compatible changes “online” (i.e. while the application is up and running) in a clustered environment (multiple application instances accessing the same database) requires a little more thought.
Basically you need to make this change in 5 steps, spread out across (at least) 4 releases (assuming a release means updating the database, either separately or during application boot using a migration tool, and updating application – in that order). I’ll use renaming a database column as an example.
1. Database: Add the new column (i.e. do not rename or drop the old one). You may also copy existing data to the new column at this point.
   Application: Start writing data to both the old and the new column.
2. Database: Copy existing data to the new column. Even if you did this in step 1, you need to do it again, since the application may have written to the old column only between the time the database migration was executed and the time the application was updated. You can opt to copy only the data written during that window (i.e. where the old column is non-null but the new column is null). This does not have to be a separate release, but can be part of the migration of the next application release.
3. Application: Start reading data from the new column instead of the old one. Note that you must not stop writing data to the old column yet: since you update one application instance (i.e. one cluster node) at a time, non-updated nodes may read the old column for data written by updated nodes.
4. Application: Stop writing to the old column.
   Database: Note that we cannot drop the old column yet, since non-updated nodes will still be writing to it.
5. Database: Drop the old column.
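The backfill step (copying data the application wrote to the old column only, between the migration and the application update) can be expressed as a single SQL UPDATE. Here is a sketch that builds that statement; the table and column names are hypothetical, and in practice the statement would simply live in a migration script:

```java
public class RenameMigration {

    // Builds the backfill statement for a column rename: copy values
    // only where the old column was written but the new one was not.
    // Table/column names are caller-supplied, hypothetical examples.
    static String backfillSql(String table, String oldCol, String newCol) {
        return "UPDATE " + table
             + " SET " + newCol + " = " + oldCol
             + " WHERE " + oldCol + " IS NOT NULL AND " + newCol + " IS NULL";
    }
}
```

The `IS NULL` guard makes the statement idempotent, so it is safe to run it both at step 1 and again at the backfill step.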
During September 2016 I’ll be speaking about ClassLoader leaks at JavaZone, JDK.IO and JavaOne. For those who listened to my talk and want to read more on the subject, here are the slides, links to my blog series and to the ClassLoader Leak Prevention library on GitHub.
Recording (from JDK.IO):
I’ll be talking about agile code review at JDK.IO in Copenhagen, and since I last talked on the subject more code review tools have become available, some of them from “big players”, so an updated list of tools seems to be in order.
- Gerrit (by Google) – Open Source, web-based, for Git only, used for Android
- Phabricator (originally by Facebook) – web-based, free when self hosted
- Upsource (by JetBrains) – web-based, free for up to 10 users, IntelliJ integration (of course)
- Crucible (by Atlassian) – commercial, web-based
- Collaborator (formerly Code Collaborator; by SmartBear) – web-based + Eclipse plugin + Visual Studio plugin (IntelliJ plugin under development), free for up to 10 users
- Klocwork – commercial, web-based
- ReviewBoard – Open Source, web-based
- AgileReview – Eclipse plugin
Older, possibly abandoned tools:
I recently released version 2.0.0 of the ClassLoader Leak Prevention library to Maven Central. This is a major refactoring that provides the following new features.
App server and non-servlet framework integration
The library now has a core module that does not assume a servlet environment. This means the library can be integrated into environments that do dynamic class loading, such as scripting engines. It also means that Java EE application servers can integrate the library, so that web apps deployed onto the server wouldn’t need to include it to be protected from java.lang.OutOfMemoryError: PermGen space / Metaspace. More details can be found in the module README.md on GitHub.
Zero-config Servlet 3.0+ module
If you’re in a Servlet 3.0 or 3.1 environment, there is no longer a need for explicitly declaring the listener in web.xml. Instead use the classloader-leak-prevention-servlet3 Maven dependency, which handles this for you automatically. For details, see the README.md on GitHub.
Preventions are now plugins
In version 1.x, you needed to subclass the library’s ServletContextListener to add, remove or change the behaviour of specific leak prevention measures. In 2.x, each prevention mechanism is a separate class implementing an interface. This makes it easier to implement your own additional preventions, remove measures from the configuration, or subclass and adjust any single mechanism.
While 1.x logged to System.err unless you subclassed and overrode the log methods, 2.x by default uses java.util.logging (JUL). You can also easily switch to the System.err behaviour, or provide your own logging.
Please note that bridging JUL to other logging frameworks (for example using jul-to-slf4j) has not been tested, and may produce unexpected results in case something is logged after the logging framework has been shut down by the library.
Hopefully you have Continuous Integration set up for your Java project in a way that Jenkins/Teamcity/whatnot triggers a build as soon as you push to GitHub, or whatever VCS you may be committing to. As part of that build, or triggered by that build, you run a bunch of automated tests and then you or some other part of your team does manual testing of the produced artifacts (WAR/JAR) – right?
Then once everyone is happy with the results you want to perform a release. Assuming you’re using Maven, that means running a build in the CI system using the Maven Release Plugin. This will build the artifacts again and run the tests again. Twice even; prepare + perform. (Axel Fontaine has previously blogged about this nuisance and suggested a workaround that he calls Maven Releases on Steroids.)
In this era of agile and frequent releases, this seems like a waste of time, don’t you think? And when you have discovered that critical bug in production, having to wait another 20/40/120/whatever minutes for the release build after the fix has already been verified can be quite frustrating.
Don’t you wish there was a way to just promote the already verified artifacts, to say “These are the ones I want to release”, and have them deployed to your Maven release repository (such as Nexus)?
Well now there is!
Available in Maven Central is now the Maven Promote Plugin that allows you to “promote” your SNAPSHOT build into a release, and have it SCM tagged and deployed the regular Maven way.
For detailed instructions on how to configure your Maven pom.xml and Jenkins, see the README for promote-maven-plugin on GitHub.
While the initial version of the plugin may not be as feature rich and/or intelligent as it could be, it gets the job done and on my day job we have been releasing to production with this plugin for months now, which is why I’ve decided to promote it to 1.0.0 GA.
Today I launch another weapon in the ongoing war on Classloader Leaks: the classloader-leak-test-framework. Admittedly, the framework itself is not new. The news is that in order to use it you no longer have to clone the Git repo, because it is now available as a Maven artifact through Maven Central.
If you want to confirm a suspected leak, just add

<dependency>
  <groupId>se.jiderhamn</groupId>
  <artifactId>classloader-leak-test-framework</artifactId>
  <version>1.0.0</version>
  <scope>test</scope>
</dependency>

to your POM and create a test case that you believe would trigger the leak. (Make sure to check GitHub for the current version.)
Heap dump when leak detected
Another improvement to the test framework that I have not previously announced, is that the framework can now automatically create a heap dump when a ClassLoader leak is detected. This makes it even easier to track down the cause of the leak and determine the required countermeasures. To activate this feature, add @Leaks(dumpHeapOnError = true) to your test method.
Test framework documentation
For further information on how to use the ClassLoader Leak test framework, see the project’s space on GitHub.
I saw it again today – a site where they had made the effort to disable pasting your password into the login form. The motive surely must be to increase security, but this may be the stupidest, most counterproductive security measure I know of. Let me explain why.
Basic password principles
The two most basic principles when it comes to online passwords are:
- Use strong passwords
- Do not use the same password on multiple sites
One reason for using a strong password is obvious – it should not be easy to guess your password (i.e. pet’s name etc). The less obvious reason is that long and complex passwords are harder for hackers to reveal using simple techniques like dictionary attacks. If you want to know more about creating strong passwords, just google it. But I have another suggestion below.
The reason not to use the same password on multiple sites is that in the unfortunate – although not that uncommon – event that a site gets hacked, and that it stored the passwords in clear text or weakly hashed without salt (don’t be that guy), hackers who get hold of your login info should not be able to log into all your other accounts. Just imagine some low profile (low security?) forum you may have posted in once or twice gets hacked, and suddenly someone controls your Google/Apple, Facebook and LinkedIn accounts. Not a pleasant thought, huh? (Tip: Enable two-factor authentication!)
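On the “don’t be that guy” point: the JDK ships with PBKDF2, so salted, deliberately slow password hashing is only a few lines. A sketch – the iteration count and salt size below are illustrative assumptions, not recommendations:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;
import java.util.Base64;

public class PasswordHashing {

    // Salted, slow hashing via the JDK's built-in PBKDF2.
    // 100,000 iterations and a 16-byte salt are illustrative values.
    static String hash(char[] password, byte[] salt) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
            byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
            return Base64.getEncoder().encodeToString(hash);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // A fresh random salt per user, stored alongside the hash
    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}
```

With a unique salt per user, two identical passwords produce different hashes, so a leaked database cannot be attacked with a precomputed table.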
Password managers to the rescue
The easiest way to use strong, unique passwords for all your online accounts is to use a password manager and have it generate a different, strong, random password for each site. Thanks to the password manager, you can have good passwords like ltAaxjykylfcq3yU1K9M for Site A and 8KtVtz2iKa0kEhJ6honf for Site B, without having to remember any of them. (But you will of course need to remember the – preferably strong – password to your password manager. This is where the tips for manually creating strong passwords come in handy!)
I personally use and highly recommend KeePass, which is free and available on multiple platforms (so you can access your passwords on both your PC and your smartphone). In my KeePass file I have 400+ passwords, most of them with a complexity like the examples above. Even though my memory often serves me well, there is just no way I could ever remember 400+ passwords as strong as those.
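Generating such a password is roughly what the password manager does under the hood – something like this sketch (the alphabet and 20-character length are chosen to match the examples above; a real manager would typically include symbols too):

```java
import java.security.SecureRandom;

public class PasswordGenerator {

    // Alphanumeric alphabet matching the example passwords above;
    // real password managers usually add symbols as well.
    static final String ALPHABET =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

    static String generate(int length) {
        SecureRandom random = new SecureRandom(); // cryptographically strong
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(random.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }
}
```

Note the use of SecureRandom rather than java.util.Random: password generation is exactly the kind of place where a predictable generator would defeat the purpose.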
Counterproductive paste disablement
Back to the problem with disabling pasting of passwords into the login form. The most straightforward way to use KeePass is to open your safe file and then just copy/paste your password into the login form. You won’t even see the password, as KeePass masks it by default. The problem with sites that have disabled pasting into the password field is that they discourage the use of password managers. Admittedly there are other ways to use password managers, such as browser integration and drag-and-drop, but the average user probably won’t bother to set that up. So, if I can’t copy/paste ltAaxjykylfcq3yU1K9M from KeePass (and don’t know there are other options), which do you think is the more likely scenario: that I unmask the password in KeePass (which, by the way, could allow someone to read it over my shoulder) and type it in manually – or that I choose a password that is easier to remember and type, maybe one of the most popular passwords in the world…? And do you think it is more or less likely that the user will reuse the same password on multiple “paste disabled” sites than on “paste enabled” sites? So by discouraging the use of password managers, do you agree these sites implicitly discourage the two basic principles for online passwords – strength and uniqueness?
If you are a developer, please don’t disable paste in your login form.
By all means, read Troy Hunt on the same subject.