Bitbucket is discontinuing Mercurial support by June 2020

Wow! I’ve just got an email from Atlassian saying that they are removing Mercurial support by June next year and will be focusing exclusively on Git:

After much consideration, we’ve decided to remove Mercurial support from Bitbucket Cloud and the API. Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020.

What used to be a very fragmented version control software market has rapidly matured. Mercurial usage on Bitbucket is steadily declining, and the percentage of new Bitbucket users choosing Mercurial has fallen to less than 1%. At the same time, Git has become the standard. According to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system with only about 3% developer adoption.

This is sad, as I really liked that they hosted Mercurial: it combined the ease of use of Subversion with the distributed nature of Git. Not to mention they also allowed private repositories for free, which is quite nice for hobby projects that you don’t want to share with others (or not yet).

Read more on their site.


Vagrant “Read-only filesystem” errors while provisioning Linux guest

At work we use Vagrant to rapidly bring up virtual development environments. One of my co-workers was doing just that, bringing up an Ubuntu Linux based development environment, but unfortunately provisioning kept failing while APT was trying to install packages: it couldn’t write anything to disk and kept spamming “Read-only filesystem” errors.

Upon further investigation we found this in the kernel log:

ubuntu-xenial login: [ 623.231365] sd 2:0:0:0: [sda] tag#1 Medium access timeout failure. Offlining disk!
[ 638.283816] blk_update_request: I/O error, dev sda, sector 5144280
[ 638.447264] Buffer I/O error on device sda1, logical block 642779
[ 638.454043] Buffer I/O error on device sda1, logical block 642780
[ 638.463154] Buffer I/O error on device sda1, logical block 642781
[ 638.535287] Buffer I/O error on device sda1, logical block 642782
[ 638.537065] Buffer I/O error on device sda1, logical block 642783
[ 638.538800] Buffer I/O error on device sda1, logical block 642784
[ 638.541206] Buffer I/O error on device sda1, logical block 642785
[ 638.542957] Buffer I/O error on device sda1, logical block 642786
[ 638.544717] Buffer I/O error on device sda1, logical block 642787
[ 638.546457] Buffer I/O error on device sda1, logical block 642788

At first we suspected a disk error, but disk checks revealed nothing. So we kept digging, and it turned out that said co-worker’s PC had run out of free RAM while the development environment was being provisioned. After some adjustments to make sure the host no longer ran out of RAM, provisioning completed without a hitch, as expected.

So the entire issue probably came about like this: the PC was swapping, which made the VM’s disk I/O so slow that the guest kernel timed out and offlined the virtual disk; the resulting I/O errors then made the guest remount its filesystem read-only (the errors=remount-ro behaviour Ubuntu uses by default), producing the “Read-only filesystem” errors APT kept hitting.
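If freeing up RAM on the host isn’t an option, another thing to try is shrinking the VM’s memory allocation in the Vagrantfile. Here’s a quick sketch (the box name and the 1024 MB figure are placeholders, not our actual setup):

# Cap the VM's RAM so the host doesn't start swapping while provisioning runs.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"   # placeholder box
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024                  # MB; pick a value the host can actually spare
  end
end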

Java web application rapid development with Maven and Jetty

At work we have some web based applications (Solr and REPOX, for example) that use Jetty as a servlet container (they come prepackaged with it). While I was looking up Jetty on Google I found something interesting and useful:

Apparently there’s a Jetty plugin for Maven that can serve your web application on your local development computer, without having to deploy it to a remote servlet container. [1][2]
This allows for quite a rapid code/build/test cycle.

All you have to do is add the Jetty plugin to the plugins section of your Maven pom.xml:

<plugin>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-maven-plugin</artifactId>
  <!-- use a released version; a -SNAPSHOT only resolves if you have
       the Jetty snapshot repository configured -->
  <version>9.4.18.v20190429</version>
</plugin>

Then you can run the plugin using Maven:

mvn jetty:run

This will start up Jetty and serve your web application at localhost:8080.

You can terminate it with Ctrl+C.
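If you don’t have a webapp at hand to try this with, here is a minimal servlet sketch (the class name and URL pattern are made up for illustration; it assumes the Servlet 3.x annotation support that Jetty 9.4 provides):

import java.io.IOException;

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Registered via the @WebServlet annotation, so no web.xml entry is needed.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from Jetty!");
    }
}

With jetty:run running, http://localhost:8080/hello should answer with the greeting.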

Sources:
[1] https://www.eclipse.org/jetty/documentation/9.4.x/jetty-maven-plugin.html
[2] https://books.sonatype.com/mvnex-book/reference/web-sect-configuring-jetty.html

Vagrant ssh on Windows 10 when outside your Windows home directory

It is common knowledge that one can log in to a Vagrant-created virtual environment with the command ‘vagrant ssh’. Thanks to the SSH key based authentication, it is normally as simple as that. However, on Windows 10, if the directory we’re trying this from is not under the user’s home directory (it was on another drive in my case), then instead of being logged in we’re presented with a password prompt. This happens because Vagrant uses whatever ssh client is on the PATH, and since Windows 10 ships its own ssh client, that is what gets used, and it refuses to use a private key file whose permissions are too open.

Solution: use Git Bash while dealing with Vagrant, because Git Bash’s ssh doesn’t care about the key file being unprotected.

Note: While usually being careless like this would pose a security risk (anyone who can read the key file can log in), as long as we’re dealing with a simple development or testing environment where there are no security concerns, we should be fine.
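By the way, you can double-check which ssh binary gets picked up by asking the shell (the locations below are just the typical ones, yours may differ):

which ssh    (in Git Bash; typically /usr/bin/ssh)
where ssh    (in cmd or PowerShell; typically C:\Windows\System32\OpenSSH\ssh.exe)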

AVG on Windows 10 can cause a blue screen of death with VirtualBox

I’ve just installed Windows 10 (version 1903) and I’ve also installed Vagrant (2.2.5) and VirtualBox (5.2.32) on my brand new AMD Ryzen 5 2600 based platform, as I use those two to bring up a development environment quickly.

Sadly, when the VM came up and Vagrant started provisioning it (in fact it was just updating the apt sources of the Ubuntu Xenial guest), Windows suddenly died and threw me a nice blue screen of death (SYSTEM_SERVICE_EXCEPTION). I tried an earlier VirtualBox version (5.2.30), but that threw me another blue screen of death (KMODE_EXCEPTION_NOT_HANDLED).

Then I started googling and found that certain antivirus products (Avast and AVG are the ones most often mentioned) contain technologies that aim to protect you from malware inside virtual environments, and these tend to cause this issue. Some posters even claimed that enabling the “Use nested virtualization where available” option would help. [1][2][3][4]

Sadly it didn’t work out so well for me, since I already had that enabled by default. So I just disabled the “Hardware assisted virtualization” option altogether, and voilà, that solved the problem.

Sources:
[1] https://superuser.com/questions/1463665/bsod-system-service-exception-on-win-10-when-using-vmware-or-virtual-box
[2] https://superuser.com/questions/1460590/virtualbox-ubuntu-18-04-lts-causing-bsod-amd-ryzen-7-3750h/1460657#1460657
[3] https://support.avg.com/answers?id=906b0000000DpG6AAK
[4] https://forums.virtualbox.org/viewtopic.php?f=6&t=89859


DevOps is not a job title!

Sadly I see it more and more often. I know many people, mostly recruiters, would beg to differ, but DevOps is still not a job title.

It’s a culture of cooperation and automation within your organization. So it’s not one person, or a group of people; it’s your entire organization that should be doing DevOps.

At its core it’s cooperation between developers and operations people, enabling your organization to work more smoothly and to develop your services faster, with fewer problems. So it’s not a single engineer or a group of engineers who automate things; it’s your developers and operations people automating together, in cooperation.

It was all popularized by John Allspaw and Paul Hammond of Flickr, with their Velocity 09 presentation “10+ Deploys Per Day: Dev and Ops Cooperation at Flickr”. See it for yourself if you haven’t yet.

“Could not find a JavaToKotlinConversionProvider” after upgrading to Android Studio 3.3

Today I dared to update to Android Studio 3.3, and as a reward I got a big, fat exception when trying to create a new project:

java.lang.RuntimeException: Could not find a JavaToKotlinConversionProvider, even though one should be bundled with Studio

After some experimenting I figured out the solution, which is fairly simple: you just need to (re)enable the Kotlin plugin.

From the main screen you just need to go to Configure -> Plugins, tick Kotlin, click OK and then restart when it is offered.

…and that’s it, you’re all set and you can now create new projects again! Happy coding! 🙂

Stop Nagios worker syslog spam on Ubuntu

We have Nagios running on one of our dev servers at work, and despite syslog logging being set to off in its config file, it’s been spamming syslog with worker messages, which is quite annoying.

Fortunately Ubuntu uses rsyslog as its default syslog daemon, which is capable of redirecting log messages based on user-defined filters. So I decided to get rid of this annoying problem and created such a filter in a new file, /etc/rsyslog.d/49-nagios.conf, with the following contents:

:syslogtag, contains, "nagios" /var/log/nagios.log
& stop

After restarting rsyslog with

sudo service rsyslog restart

the problem was solved: those messages are now redirected into the specified log file instead. 🙂
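You can verify that the filter matches with logger(1), which lets you set the syslog tag by hand (the tag below is a made-up test value; it just has to contain “nagios”):

logger -t nagios-test "testing the filter"
tail -n 1 /var/log/nagios.log

The test message should show up in /var/log/nagios.log and stay out of syslog.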

Netbeans 8.1 “No tests executed”

Recently I’ve started to learn and practise unit testing in Java using JUnit, to improve my productivity and my confidence in my own code. Netbeans is quite useful for tests too, because it automates many things around writing and running them. However, today I noticed something weird: I wrote some tests and I could run them just fine by themselves from the projects widget using the “test file” context menu item, but when I wanted to “test project” from the run menu, Netbeans said “No tests executed” in the test window. I started digging and unfortunately couldn’t find anything useful. I did notice, however, that in all the examples the test classes were called XYZTest, while my test classes were named TestXYZ. I tried renaming them, and guess what? Renaming them solved the problem. So to sum up my experiences:

If you want Netbeans to find and run your test classes, name them like this: XYZTest, where XYZ can of course be any string that is legal in a Java class name.
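For illustration, here is a minimal JUnit 4 test class following that convention (the class and method names are made-up placeholders):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Named with the XYZTest pattern, so "test project" finds it;
// the same class named TestCalculator was not picked up.
public class CalculatorTest {

    @Test
    public void additionWorks() {
        assertEquals(4, 2 + 2);
    }
}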

SocketTimeoutException in servlet running in Tomcat 7 behind an Nginx 1.10.3 reverse proxy

We have a digital object repository called DSpace at work, and we use the SWORDv2 protocol to deposit digital objects into it. The DSpace GUI and its SWORDv2 endpoint run as servlets in a Tomcat container, and it’s all behind Nginx acting as a reverse proxy.

The other day one of my co-workers wanted to deposit a larger digital object package (8 GB) into the repository, but unfortunately it failed because the servlet kept throwing a SocketTimeoutException while it was reading the data being deposited, so I had to investigate and solve the problem.

java.net.SocketTimeoutException: Read timed out
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:170)
java.net.SocketInputStream.read(SocketInputStream.java:141)
org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:535)
org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:504)
org.apache.coyote.http11.InternalInputBuffer$InputStreamInputBuffer.doRead(InternalInputBuffer.java:566)
org.apache.coyote.http11.filters.IdentityInputFilter.doRead(IdentityInputFilter.java:137)
org.apache.coyote.http11.AbstractInputBuffer.doRead(AbstractInputBuffer.java:339)
org.apache.coyote.Request.doRead(Request.java:438)
org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:290)
org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:449)
org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:315)
org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:167)
org.swordapp.server.SwordAPIEndpoint.storeAndCheckBinary(SwordAPIEndpoint.java:197)
org.swordapp.server.SwordAPIEndpoint.addDepositPropertiesFromBinary(SwordAPIEndpoint.java:388)
org.swordapp.server.CollectionAPI.post(CollectionAPI.java:160)
org.swordapp.server.servlets.CollectionServletDefault.doPost(CollectionServletDefault.java:48)
javax.servlet.http.HttpServlet.service(HttpServlet.java:650)
javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)

I read the Tomcat and DSpace logs, but they revealed nothing. I did notice that DSpace had some interrupted deposits in its upload directory. All of the files were exactly 2 GB in size, which was suspicious, but at first I couldn’t figure out why: I couldn’t find any limit that would explain why the transfer should die at just 2 gigs.

I am not an Nginx expert, but I enabled debug logging and started reading the logs. At first sight they didn’t reveal anything either: no errors, only that Tomcat returned 500 while depositing, which is when the SocketTimeoutException was raised. However, some lines caught my attention anyway.

2018/07/18 09:27:55 [debug] 4273#4273: *1 sendfile: @0 2147479552
2018/07/18 09:27:55 [debug] 4273#4273: *1 sendfile: 2147479552 of 2147479552 @0

That big integer was quite suspicious, and after doing some simple math I figured out that 2147479552 divided by 1024 twice is roughly 2048; in fact 2147479552 is 2^31 minus 4096, i.e. 2 GB minus one 4 KB page. So this could well be a byte count, which made me start thinking. After sending this much data and some waiting, Tomcat sent the 500 with that exception, so I figured it was worth looking into. I started digging in Nginx’s source code and found a comment block and a constant below it:

/*
* On Linux up to 2.4.21 sendfile() (syscall #187) works with 32-bit
* offsets only, and the including <sys/sendfile.h> breaks the compiling,
* if off_t is 64 bit wide. So we use own sendfile() definition, where offset
* parameter is int32_t, and use sendfile() for the file parts below 2G only,
* see src/os/unix/ngx_linux_config.h
*
* Linux 2.4.21 has the new sendfile64() syscall #239.
*
* On Linux up to 2.6.16 sendfile() does not allow to pass the count parameter
* more than 2G-1 bytes even on 64-bit platforms: it returns EINVAL,
* so we limit it to 2G-1 bytes.
*/

#define NGX_SENDFILE_MAXSIZE 2147483647L

After some further digging I realized that this sendfile() call is the default network I/O mechanism of Nginx, but it can be turned off by setting

sendfile off;

in the http scope of the Nginx config file. As I suspected, this solved the problem, and we could deposit the packages without issues. Now, as a short summary, here’s what this is about and what happened:

sendfile() is an I/O system call that transfers data between file descriptors without having to first copy the data into user-space RAM, so it’s faster than the traditional approach of reading from the source into a buffer and then writing to the destination. It is enabled by default in Nginx, and it’s one of the things that make Nginx such a fast web server. However, on Linux a single sendfile() call is limited to just under 2 GB. So when my co-worker was depositing his package, Nginx accepted the deposit and started sending it on to Tomcat, but it never sent all the data: when it finished with the first 2 GB of the 8 GB file it just stopped, while Tomcat was still waiting for the rest. After a short while Tomcat timed out and returned an HTTP code of 500 to Nginx. Turning off sendfile() fixes this, as Nginx then sends all the data, although it makes network I/O somewhat slower.
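As a closing aside, the same pattern exists in Java land: FileChannel.transferTo() typically maps to sendfile() on Linux, and just like sendfile() itself, a single call may transfer fewer bytes than requested, so correct code has to loop. A small sketch (the class and method names are mine, nothing to do with DSpace or Nginx):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopySend {

    // Transfers the whole file, looping because a single transferTo()
    // call may stop early (for example at an OS chunk limit such as
    // the ~2 GB sendfile() cap described above).
    public static void send(Path source, WritableByteChannel out) throws IOException {
        try (FileChannel in = FileChannel.open(source, StandardOpenOption.READ)) {
            long position = 0;
            long size = in.size();
            while (position < size) {
                long sent = in.transferTo(position, size - position, out);
                if (sent == 0) {
                    break; // the destination can't accept more data right now
                }
                position += sent;
            }
        }
    }
}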