With my recent graphics card upgrade I’ve had to face the fact that modern graphics cards no longer have D-Sub VGA connectors. Since I didn’t want to replace my 1080p LED display just yet, I needed a converter. As my graphics card has both DisplayPort and HDMI connectors, I bought converters for both. In the computer shop where I bought them I was told to expect problems.
I’ll describe my experiences with each below.
CableExpert (Gembird) passive HDMI-VGA converter
This one costs about 8 EUR. Display quality was perfect; I saw no problems with it. However, when Windows wanted to put the display into sleep mode, the display didn’t go to sleep: it just lost the signal, so it kept searching for one, displaying an error message and keeping the panel on.
Equip active HDMI-VGA converter
This one costs about 22 EUR. It requires an external power source, supplied through a micro-USB port. Display quality is perfect here too. The converter also provides a jack audio output, though in my case I didn’t need it. Windows could put the display to sleep just fine with this one. However, after two days it seemingly died; only disconnecting and reconnecting everything could get it to work again.
CableExpert (Gembird) passive DisplayPort-VGA converter
This one costs about 8 EUR. When I first connected the converter, the display was flickering and showed strange black lines (exactly the problem I had been warned about when buying), so I thought it wasn’t going to work. After reconnecting everything, however, it started working. Display quality is perfect, it requires no external power source, and Windows can put the display to sleep just fine. I’ve been using this converter for days now without a problem.
If you can’t or won’t upgrade your display, you need a converter that works for you, so be prepared to try more than one that fits your graphics card and display.
During the year I’ve upgraded my computer with a new motherboard, CPU, RAM and graphics card.
I wanted to compare the performance of the old and the new parts, so I did some benchmarking, and now I’m sharing the results here.
- Asus P8B75-V motherboard
- Intel Pentium G840 2.8 GHz CPU (2 core, 2 threads)
- Kingston DDR3 KHX1333C9D3B1K2/8G 2x4GB 1333 MHz RAM
- Sapphire Radeon HD 6670 1GB PCIe graphics card.
- Windows 7
- Asus PRIME 540 PLUS motherboard
- AMD Ryzen 5 2600 3.4GHz CPU (6 cores, 12 threads)
- Kingston DDR4 XMP HX430C15PB3/16 2x16GB 3000 MHz RAM
- Gigabyte GeForce GTX 1650 ITX OC 4GB PCIe graphics card.
- Windows 10
I upgraded in two phases: first I upgraded the CPU and with it I had to upgrade the motherboard and RAM too. Then some days ago I upgraded the graphics card. I’ve benchmarked in both phases so I could compare the starting system with the end result and also compare the results of the old and new platform with the old graphics card.
I needed tools that can benchmark both the old and the new system. The old graphics card cannot run DirectX 12 so I had to use a DirectX 11 (or older) benchmark. I also wanted to benchmark both the system’s general performance and graphics performance.
I settled on the following tools:
- 3DMark 11 performance
Old platform vs new platform, same graphics card
Surprisingly, in the DX10 MRender and Splatting tests the same graphics card paired with the much older, lower-class CPU seems to perform better. The difference is around 5%, so I guess it could be measurement error.
Old platform + old graphics card vs. new platform + new graphics card
Here everything performs as expected. The new platform + new graphics card outperforms the old system.
Old platform vs new platform, same graphics card
Here the physics test, which seems to be CPU-bound, shows the newer platform outperforming the older one by a large margin. However, graphics performance seems to be about the same, with only a few percent difference, which again could be measurement error.
Old platform + old graphics card vs. new platform + new graphics card
The physics test here, however, shows the older platform outperforming the newer one by 7%, which I cannot explain as of yet. This is worth investigating.
Well, no surprise (setting aside the anomalies mentioned) that the newer, more advanced system is much faster. The higher-class CPU performs faster, the DDR4 RAM outperforms the DDR3 kit although its latency is higher, and the much newer, higher-class GPU outperforms the older, lower-class one.
Wow! I’ve just got an email from Atlassian that they are removing Mercurial support by June next year and will be focusing on Git exclusively:
After much consideration, we’ve decided to remove Mercurial support from Bitbucket Cloud and the API. Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020.
What used to be a very fragmented version control software market has rapidly matured. Mercurial usage on Bitbucket is steadily declining, and the percentage of new Bitbucket users choosing Mercurial has fallen to less than 1%. At the same time, Git has become the standard. According to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system with only about 3% developer adoption.
This is sad, as I really liked that they hosted Mercurial: it combined the ease of use of Subversion with the distributed nature of Git. Not to mention they also allowed private repositories for free, which is quite nice for hobby projects that you don’t want to share with others (or not yet).
At work we use Vagrant to rapidly bring up virtual development environments. One of my coworkers was doing just that, bringing up an Ubuntu Linux based development environment, but unfortunately provisioning kept failing while APT was trying to install packages. The error was always about being unable to write, with the log spamming “Read-only file system”.
Upon further investigation we found this in the kernel log:
ubuntu-xenial login: [ 623.231365] sd 2:0:0:0: [sda] tag#1 Medium access timeout failure. Offlining disk!
[ 638.283816] blk_update_request: I/O error, dev sda, sector 5144280
[ 638.447264] Buffer I/O error on device sda1, logical block 642779
[ 638.454043] Buffer I/O error on device sda1, logical block 642780
[ 638.463154] Buffer I/O error on device sda1, logical block 642781
[ 638.535287] Buffer I/O error on device sda1, logical block 642782
[ 638.537065] Buffer I/O error on device sda1, logical block 642783
[ 638.538800] Buffer I/O error on device sda1, logical block 642784
[ 638.541206] Buffer I/O error on device sda1, logical block 642785
[ 638.542957] Buffer I/O error on device sda1, logical block 642786
[ 638.544717] Buffer I/O error on device sda1, logical block 642787
[ 638.546457] Buffer I/O error on device sda1, logical block 642788
At first we suspected a disk error, but a disk check revealed nothing. So we kept digging, and it turned out that said co-worker’s PC had run out of free RAM while the development environment was being provisioned. After some adjustments, not running out of RAM solved the problem, as expected.
So the entire issue probably came up because the PC was swapping, which made I/O too slow, which made the kernel offline the disk, which in turn caused the read-only filesystem errors.
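The post doesn’t spell out the adjustments, but one typical knob is capping the guest’s memory so the host isn’t pushed into swap. As a sketch (the box name matches the Xenial guest from the logs; the memory and CPU values are purely illustrative):

```ruby
# Vagrantfile (sketch): limit the guest's RAM so provisioning
# doesn't exhaust the host's free memory.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"     # the Ubuntu Xenial guest
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024                    # MB -- illustrative value
    vb.cpus   = 2
  end
end
```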
At work we have some web based applications (SolR, REPOX for example) that use Jetty as a servlet container (they come with it prepackaged). While I was looking up Jetty on Google I found something interesting, and useful:
Apparently there’s a Jetty plugin for Maven that can serve your web application on your local development machine without you having to deploy it to a remote servlet container.
This allows for a quite rapid code/build/test cycle.
All you have to do is add the Jetty plugin to the plugins section of your Maven pom.xml:
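For reference, the snippet looks roughly like this (using the Eclipse Jetty plugin coordinates; the version number is just an example, pick one that matches your Jetty and Java versions):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.eclipse.jetty</groupId>
      <artifactId>jetty-maven-plugin</artifactId>
      <version>9.4.19.v20190610</version>
    </plugin>
  </plugins>
</build>
```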
Then you can run the plugin using Maven:
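The goal exposed by the jetty-maven-plugin for this is:

```shell
mvn jetty:run
```

This builds the webapp and serves it straight from the source tree, so there is no packaging or deploy step in the loop.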
This will start up Jetty and serve your servlet on localhost:8080.
You can terminate it with Ctrl+C.
It is common knowledge that one can log in to a Vagrant-created virtual environment with the command ‘vagrant ssh’. If everything goes well, thanks to SSH key based authentication it is as simple as that. However, on Windows 10, if the directory where we’re trying this is not inside the user’s home directory (it was on another drive in my case), then instead of logging in we’re presented with a password prompt. This happens because Vagrant uses the ssh client found on the path, and since Windows 10 ships its own ssh client, that is what gets used, and it refuses to use a private key file that is unprotected.
Solution: use Git Bash when dealing with Vagrant, because Git Bash’s ssh doesn’t care about the key file being unprotected.
Note: while being careless like this would usually pose a security risk (anyone who can read the key file can log in), as long as we’re dealing with a simple development or testing environment where there are no security concerns, we should be fine.
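For completeness: an alternative workaround I haven’t tried myself is to tighten the key file’s permissions so that Windows 10’s own ssh accepts it. The path below assumes VirtualBox’s default machine name and is illustrative:

```shell
:: Run in the Vagrant project directory (cmd.exe).
:: Strip the inherited permissions from the private key, then grant
:: read access to the current user only.
icacls .vagrant\machines\default\virtualbox\private_key /inheritance:r
icacls .vagrant\machines\default\virtualbox\private_key /grant:r "%USERNAME%:R"
```

After this, Windows’ ssh should consider the key file sufficiently protected.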
I’ve just installed Windows 10 (version 1903) and I’ve also installed Vagrant (2.2.5) and VirtualBox (5.2.32) on my brand new AMD Ryzen 5 2600 based platform, as I use those two to bring up a development environment quickly.
Sadly, when the VM came up and Vagrant started provisioning it (in fact, it was updating the apt sources of the Ubuntu Xenial guest), Windows suddenly died and threw me a nice blue screen of death (SYSTEM_SERVICE_EXCEPTION). I tried an earlier VirtualBox version (5.2.30), but that threw another blue screen of death (KMODE_EXCEPTION_NOT_HANDLED).
Then I started googling and found that certain antivirus products (Avast and AVG are the ones most often mentioned) contain technologies that aim to protect you from malware inside virtual environments, and these tend to cause this issue. Some posters even claimed that enabling the antivirus’s “Use nested virtualization where available” option would help.
Sadly that didn’t work out for me, since I already had it enabled by default. So I just disabled the antivirus’s “Hardware assisted virtualization” option altogether, and voilà, that solved the problem.
Sadly I see it more and more often. I know many people, mostly recruiters, would beg to differ, but DevOps is still not a job title.
It’s a culture of cooperation, and automation within your organization. So it’s not one person, or a group of persons. It’s your entire organization that should be doing DevOps.
At its core it’s cooperation between developers and operations people that enables your organization to work more smoothly and develop your services faster with fewer problems. So it’s not a single engineer or a group of engineers who automate things. It’s your developers and operations people working together, automating together in cooperation.
It was all popularized by John Allspaw and Paul Hammond of Flickr, with their Velocity 09 presentation “10+ Deploys Per Day: Dev and Ops Cooperation at Flickr”. See for yourself:
Today I dared update to Android Studio 3.3 and as a reward I got a big, fat exception when trying to create a new project:
java.lang.RuntimeException: Could not find a JavaToKotlinConversionProvider, even though one should be bundled with Studio
After some experimenting I figured out the solution which is fairly simple: You just need to (re)enable the Kotlin plugin.
From the main screen you just need to go to Configure -> Plugins, tick Kotlin, click OK and then restart when it is offered.
We have Nagios running on one of our dev servers at work, and despite syslog logging being set to off in its config file, it’s been spamming syslog with worker messages, which is quite annoying.
Fortunately Ubuntu uses rsyslog as its default syslog daemon, which is capable of redirecting log messages based on user-defined filters. So I decided to get rid of this annoying problem by creating such a filter: a file /etc/rsyslog.d/49-nagios.conf with the following contents:
:syslogtag, contains, "nagios" /var/log/nagios.log
After restarting rsyslog with
sudo service rsyslog restart
the problem was solved: rsyslog now redirects those messages into the specified log file instead. 🙂
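One caveat worth noting: a property-based filter line like the one above writes matching messages to the given file, but depending on the rest of the configuration rsyslog may still deliver them to the default syslog file as well. If that happens, a discard rule right after the filter stops further processing of the matched messages (`& stop` works on rsyslog 7 and newer; older versions use `& ~` instead):

```
# /etc/rsyslog.d/49-nagios.conf -- redirect Nagios messages,
# then stop processing them so they don't also land in syslog.
:syslogtag, contains, "nagios" /var/log/nagios.log
& stop
```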