We have Nagios running on one of our dev servers at work, and despite syslog logging being set to off in its config file, it has been spamming syslog with worker messages, which is quite annoying.
Fortunately Ubuntu uses rsyslog as its default syslog daemon, which is capable of redirecting log messages based on user-defined filters. So I decided to get rid of this annoying problem by creating such a filter. I created the file /etc/rsyslog.d/49-nagios.conf with the following contents:
:syslogtag, contains, "nagios" /var/log/nagios.log
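A note on the filter: by itself it only adds a new destination, so depending on the rest of the rsyslog configuration the Nagios messages may still land in the default syslog file as well. A sketch of the same filter with a stop rule appended, which discards the messages once they have been written to the Nagios log (on older rsyslog versions the equivalent of `stop` is `& ~`):

```
:syslogtag, contains, "nagios" /var/log/nagios.log
& stop
```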
After restarting rsyslog with
sudo service rsyslog restart
The problem is now solved: the messages are redirected into the specified log file instead. 🙂
We have a digital object repository called DSpace at work, and we use the SWORDv2 protocol to deposit digital objects into it. The DSpace GUI and its SWORDv2 endpoint run as servlets in a Tomcat container, all behind Nginx acting as a reverse proxy.
The other day one of my co-workers wanted to deposit a larger digital object package (8 GB) into the repository, but unfortunately it failed because the servlet kept throwing a SocketTimeoutException while it was reading the data being deposited, so I had to investigate and solve the problem.
java.net.SocketTimeoutException: Read timed out
I read the Tomcat and DSpace logs but they revealed nothing. I noticed that DSpace had some interrupted deposits in its upload directory. All of the files were 2 GB in size, which was suspicious, but I couldn't figure out why at first, because I couldn't find any limit that would explain why it should die at exactly 2 gigs.
I am not an Nginx expert, but I enabled debug logging and started reading logs. Unfortunately at first sight it didn't reveal anything: I saw no errors, only that Tomcat returned 500 while depositing; that's when the SocketTimeoutException was raised. However, some lines caught my attention anyway.
2018/07/18 09:27:55 [debug] 4273#4273: *1 sendfile: @0 2147479552
2018/07/18 09:27:55 [debug] 4273#4273: *1 sendfile: 2147479552 of 2147479552 @0
That big integer was quite suspicious, and after doing some simple math I figured out that 2147479552 divided by 1024 twice is almost exactly 2048, so it looked like a byte count of roughly 2 GB. This made me start thinking. After sending this much data and some waiting, Tomcat sent the 500 with that exception, so I figured it was worth looking into. I started digging in Nginx's source code and found a comment block and a constant below it:
* On Linux up to 2.4.21 sendfile() (syscall #187) works with 32-bit
* offsets only, and the including <sys/sendfile.h> breaks the compiling,
* if off_t is 64 bit wide. So we use own sendfile() definition, where offset
* parameter is int32_t, and use sendfile() for the file parts below 2G only,
* see src/os/unix/ngx_linux_config.h
* Linux 2.4.21 has the new sendfile64() syscall #239.
* On Linux up to 2.6.16 sendfile() does not allow to pass the count parameter
* more than 2G-1 bytes even on 64-bit platforms: it returns EINVAL,
* so we limit it to 2G-1 bytes.
#define NGX_SENDFILE_MAXSIZE 2147483647L
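Out of curiosity, the slightly smaller number in the debug log appears to be this constant aligned down to a 4 KiB page boundary (my interpretation of the log, not something I verified in the nginx source); the arithmetic at least checks out:

```cpp
#include <cassert>
#include <cstdint>

// nginx caps a single sendfile() call at 2^31 - 1 bytes...
const int64_t kNgxSendfileMaxsize = 2147483647L; // NGX_SENDFILE_MAXSIZE
const int64_t kPageSize = 4096;                  // typical x86-64 page size

// ...and aligned down to a page boundary that gives exactly the
// number seen in the debug log.
const int64_t kAligned = kNgxSendfileMaxsize & ~(kPageSize - 1);
```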
After some further digging I realized that this sendfile() call is the default network I/O implementation of Nginx, but it can be turned off by setting

sendfile off;

in the http scope of the Nginx config file. As I suspected, this solved the problem, and we could deposit the packages without problems. Now, as a short summary, here's what this is about and what happened:
sendfile() is an I/O syscall that transfers data between file descriptors without having to first read the data into user-space memory, so it's faster than the traditional approach of reading from the source into a buffer and then writing to the destination. It is enabled by default in Nginx, and it's one of the optimizations that make Nginx a fast web server. However, a single sendfile() call is limited to just under 2 GB. So when my co-worker was depositing his package, Nginx accepted the deposit and forwarded it to Tomcat. The trouble was that it wouldn't send all the data: when it finished with the 2 GB part of the 8 GB file it just stopped, while Tomcat was still waiting for the rest. After a short while Tomcat timed out and returned HTTP 500 to Nginx. Turning off sendfile() fixes this, as Nginx then sends all the data, though it makes network I/O slower.
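For illustration, the usual way to work within this limit is to call sendfile() in a loop, capping each call below the 2 GB boundary. This is a minimal sketch, not nginx's actual code; the helper name and the cap value (taken from the debug log above) are mine:

```cpp
#include <algorithm>
#include <cassert>
#include <fcntl.h>
#include <string>
#include <sys/sendfile.h>
#include <unistd.h>

// Per-call cap, just under 2 GB (the value from the nginx debug log).
static const size_t kMaxChunk = 2147479552UL;

// Send `size` bytes from in_fd to out_fd, capping each sendfile() call
// below the 2 GB boundary. Error handling is minimal; real code would
// also retry on EINTR/EAGAIN.
bool sendfile_all(int out_fd, int in_fd, off_t size) {
    off_t offset = 0;
    while (offset < size) {
        size_t chunk =
            std::min<size_t>(kMaxChunk, static_cast<size_t>(size - offset));
        ssize_t sent = sendfile(out_fd, in_fd, &offset, chunk);
        if (sent <= 0)
            return false; // sendfile() updates `offset` on success
    }
    return true;
}
```

On Linux 2.6.33 and later, sendfile() also accepts a regular file as the destination, which makes the helper easy to try out without a socket.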
At work we still use Tomcat 7 in production, and I needed to set up debugging for various development systems. This article shows how to enable Tomcat 7 remote debugging.
Enabling Tomcat 7 remote debugging via JDWP
I use Ubuntu 16.04 LTS, so I'll use that in the example, but other distros won't be much different, except for the paths and (re)starting the service of course.
- Edit or create the file /usr/share/tomcat7/bin/setenv.sh and put in the following content:
export JAVA_OPTS="-Xdebug \
-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"
Note: Obviously, if the file already exists and already has some content, just add the parameters to the existing JAVA_OPTS instead of adding the entire line.
- Restart Tomcat
sudo service tomcat7 restart
- On Windows, go to the Tomcat binary directory, which by default is
c:\Program Files\Apache Software Foundation\Tomcat 7.0\bin
- Start the program Tomcat7w.exe
- Switch to the Java tab and add the following lines to the Java options textbox:

-Xdebug
-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n

Note: It is important that each parameter is added on a separate line, and that the lines have no trailing whitespace!
- Restart Tomcat 7
net stop tomcat7
net start tomcat7
Attaching the NetBeans debugger to Tomcat 7
Now that we have Tomcat running with remote debugging on, we can attach NetBeans to it.
- Click Debug – Attach Debugger; a dialog box will appear
- Select Java Debugger (JPDA) as the Debugger
- Select SocketAttach as Connector
- Fill in the host name / IP address in the Host field
- Fill in the port in the Port field; in this example the port is 8787, but obviously it can be any port that isn't already taken
- Click OK
- If everything went OK, the debugging tab should show up, showing the running threads
…and that’s it! Happy bug hunting!
As you all probably know, Cppcheck is a static code analysis tool for C and C++. KDevelop has a plugin that provides a front-end for it, called kdev-cppcheck.
The good news is that I've updated its GUI, and it now uses the KDevelop Problem Checker Framework.
In the past it had its own toolview, where it showed issues in different formats (flat issue list, grouped by file, grouped by issue severity), based on the settings in a KCM module.
Here’s a screenshot showing an example of this
What I’ve done is break up that KCM module and create a per-project settings window and a global settings window. The global settings window allows you to set the location of the cppcheck tool
The per project settings window allows one to set the rest: parameters, and what should be checked
Also, the results are now shown in the Problems toolview, just like problems found by the background parser, in their own tab.
Here’s a video showing the workflow
First of all, let me introduce some concepts for readers who are unfamiliar with them.
What is Krazy2?
Krazy2 is a static analysis tool for KDE code: it scans the source for common issues and violations of KDE's coding conventions.
What is kdev-krazy2?
kdev-krazy2 is a plugin for KDevelop that provides a frontend for Krazy2, so it can be run directly from KDevelop. The resulting issues also show up in KDevelop.
Up until now the plugin had its own toolview. That’s where settings could be changed, analysis started, and where the issues showed up. Let’s see some screenshots!
The first one shows the main KDevelop window, with the plugin loaded, showing the krazy2 toolview docked in the bottom (fairly large picture, feel free to click).
Clicking either the “Select paths” or “Select checkers” button shows a settings dialog; not surprisingly, you can select paths and checkers in them. The next two screenshots show those.
Finally the result of the analysis is shown in the toolview
All this was in the past. Now the settings can be changed in the per project settings window
The analysis can be started from the Run menu.
The results show up in the Problems toolview, the same way as problems detected by the background parser do, in a separate tab
Here’s a video showing how it all works
I’m pleased to announce that the KDevelop Checker Framework has been pushed to the KDevPlatform repository. Here are some details about it:
- Moved ProblemModel to shell
- Reworked the Problems toolview. Now it works like this:
- ProblemModels are added to ProblemModelSet.
- ProblemReporterFactory makes instances of ProblemsView.
- ProblemsView takes the models from ProblemModelSet (it also subscribes to updates about them, so when a model is added or removed it can add or remove the corresponding view) and provides a tabbed widget where the views for them can be added. It creates instances of ProblemTreeView, which show the problems in a ProblemModel, and adds them to the tabs. The tabs also show the number of problems in their ProblemModels.
- The toolview will only add actions that are supported by the model (for example: filtering, grouping, reparsing, showing imports; obviously reparsing doesn’t make sense for runtime problem checkers)
See the video:
- First it shows that the “old” problem reporter still works as intended (which also uses the new code now)
- Then from 1:07 onward it shows an example problem model/view working with randomly generated test data.
- It shows the features of the new model(s), that is filtering by files/project and issue severity.
- It also shows the grouping support (grouping by severity and by path).
- Broke up ProblemModel into 2 parts
- Base ProblemModel that provides the QAbstractItemModel interface for views and can use various ProblemStores to store problems. By default it uses FilteredProblemStore.
- ProblemReporterModel is basically the old ProblemModel that grabs problems from DUChain, it’s a subclass of ProblemModel.
- ProblemStore simply stores problems as a list (well technically it stores them in a tree, but it only has 1 level, so it’s a list). There’s no filtering, no grouping. It’s perfect for ProblemReporterModel since it does filtering itself when grabbing the problems from DUChain.
- FilteredProblemStore DOES filtering, and grouping itself. It stores problems in a tree (ProblemStoreNode subclasses). The tree structure depends on the grouping method, which is implemented with GroupingStrategy subclasses.
- Moved WatchedDocumentSet and its subclasses from ProblemModel to ProblemStore, as it is really a detail that the model itself doesn’t need; ProblemStore, which actually stores the problems, is what needs it.
- Created a new problem class, DetectedProblem, and put both it and the “old” Problem class under the IProblem interface. The intent was to create a class with a clear interface for problems, which ProblemStore can simply store. I originally wanted to move the problems out of DUChain entirely and replace the “old” Problem class with the new one. However, I realized that this isn’t practical because of the “show imports” feature, which shows the problems from imported contexts. DUChain is the class that knows about those, and it would be way too much work to extract that from it; not to mention it doesn’t even make sense, since that logic really belongs there.
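The store/strategy split described above can be sketched with a self-contained toy model. This is illustrative only: the names mirror the KDevPlatform classes, but none of this is the real API.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

enum class Severity { Error, Warning, Hint };

struct Problem {
    Severity severity;
    std::string path;
    std::string description;
};

// Like ProblemStore: a flat list, no filtering or grouping.
class ProblemStore {
public:
    virtual ~ProblemStore() = default;
    virtual void addProblem(const Problem &p) { m_problems.push_back(p); }
    const std::vector<Problem> &problems() const { return m_problems; }
protected:
    std::vector<Problem> m_problems;
};

// Like GroupingStrategy: decides which group node a problem goes under.
struct GroupingStrategy {
    virtual ~GroupingStrategy() = default;
    virtual std::string groupOf(const Problem &p) const = 0;
};

struct SeverityGrouping : GroupingStrategy {
    std::string groupOf(const Problem &p) const override {
        switch (p.severity) {
        case Severity::Error:   return "Error";
        case Severity::Warning: return "Warning";
        default:                return "Hint";
        }
    }
};

// Like FilteredProblemStore: also maintains grouped nodes, with the
// tree shape delegated to the strategy.
class FilteredProblemStore : public ProblemStore {
public:
    explicit FilteredProblemStore(const GroupingStrategy *s) : m_strategy(s) {}
    void addProblem(const Problem &p) override {
        ProblemStore::addProblem(p);
        m_groups[m_strategy->groupOf(p)].push_back(p);
    }
    const std::map<std::string, std::vector<Problem>> &groups() const {
        return m_groups;
    }
private:
    const GroupingStrategy *m_strategy;
    std::map<std::string, std::vector<Problem>> m_groups;
};
```

Swapping in a different GroupingStrategy subclass (say, one grouping by path) changes the tree shape without touching the store, which is the point of the design.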
Using this new system is fairly straightforward:
All one has to do is instantiate a model, add it to the model set:
KDevelop::ILanguageController *lc = KDevelop::ICore::self()->languageController();
KDevelop::ProblemModelSet *pms = lc->problemModelSet();
m_model = new KDevelop::ProblemModel(this);
pms->addModel(QStringLiteral("Example"), m_model); // the name shown on the tab
Then later inject problems into it:
KDevelop::DetectedProblem *p = new KDevelop::DetectedProblem();
m_model->addProblem(KDevelop::IProblem::Ptr(p));
Here’s a class diagram about the relevant classes:
Today I had to rescue an old server running Debian 3.0 Woody. Basically I had to (or have to, since I’m still working on it) install a new server and restore the users and data, including mail. The easiest way to restore the mail messages, mail folders, etc. was to use the same IMAP server software, which is UW-Imapd. Unfortunately the latest LTS Ubuntu (14.04) doesn’t provide it, so I had to work around this problem.
It’s really not hard to do; I just had to take the following steps:
First of all, install dependencies
apt-get install inetutils-inetd libc-client2007e mlock
Then grab UW-Imapd from an earlier version of Ubuntu, from this page: http://packages.ubuntu.com/precise/uw-imapd
…and install it!
dpkg -i uw-imapd_2007e~dfsg-3.2ubuntu1_amd64.deb
Voilà! Now the mail can be read on the new system!
Good news everyone!
While Ubuntu doesn’t seem to have asterisk-gui in its repositories, it’s certainly possible to get it working! All one has to do first is follow this guide.
When that’s done (the installation part: basically check out from SVN, configure, build, install, no big deal), you are just a few commands away from a working asterisk-gui!
Unfortunately the make install command installs the web interface to the wrong place (on Ubuntu at least), so we’ll have to correct that, and also fix the permissions afterwards:
ln -s /var/lib/asterisk/static-http /usr/share/asterisk/static-http
chown asterisk /var/lib/asterisk -R
chgrp asterisk /var/lib/asterisk -R
Then restart Asterisk, and navigate to the following URL:
After logging in and completing the start-up configuration, you should see something similar to this:
That’s all folks!