
Captain's Log | Wednesday 4th of November 2015

How can I set up csf to block postfix auth failure IPs? | ConfigServer Firewall (csf)

# In regex.custom.pm: match postfix SASL auth failures and capture the offending IP
if (($lgfile eq $config{CUSTOM1_LOG}) and ($line =~ /^\S+\s+\d+\s+\S+ \S+ postfix\/smtpd\[\d+\]: warning:.*\[(\d+\.\d+\.\d+\.\d+)\]: SASL [A-Za-z]*? authentication failed/)) {
    # fields: message, IP, unique ID, failures before block, port(s) to block, temp-block seconds
    return ("Failed SASL login from",$1,"mysaslmatch","3","25","600");
}
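You can sanity-check the pattern outside csf before deploying it. Here's a quick Python sketch with the regex translated from the Perl above; the sample log line is fabricated for illustration:

```python
import re

# Python translation of the csf custom regex above (assumes syslog-style
# maillog lines; the sample below is made up for illustration)
SASL_FAIL = re.compile(
    r'^\S+\s+\d+\s+\S+ \S+ postfix/smtpd\[\d+\]: warning:'
    r'.*\[(\d+\.\d+\.\d+\.\d+)\]: SASL [A-Za-z]*? authentication failed'
)

sample = ("Nov  4 03:22:11 mail postfix/smtpd[12345]: warning: "
          "unknown[203.0.113.7]: SASL LOGIN authentication failed: "
          "authentication failure")

m = SASL_FAIL.search(sample)
if m:
    print("offending IP:", m.group(1))  # -> offending IP: 203.0.113.7
```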

CUSTOM1_LOG should point to /var/log/maillog (CentOS/RHEL)
CUSTOM1_LOG should point to /var/log/mail.log (Debian)

/usr/local/csf/bin/regex.custom.pm (CentOS)
/etc/csf/regex.custom.pm (Debian)

change the CUSTOM1_LOG location in /etc/csf/csf.conf (Debian)
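For reference, the relevant line in csf.conf looks something like this (Debian path shown; use /var/log/maillog on CentOS/RHEL):

```
CUSTOM1_LOG = "/var/log/mail.log"
```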

to restart csf:
su
csf -r
(though I think it's lfd that actually needs restarting, which I did via Webmin)

Integration Tests


What I'm about to describe isn't integration testing, but I couldn't find a better name for it, so that will do.

For a project of mine which runs many daemons, I have each process call a model at the start and end of its run. At the start it updates a table to say it's running and sets a timestamp; at the end it updates the table to say it isn't running any more and updates the timestamp.
  
Now why do I bring you such an interesting story, I hear you ask?
  
The script won't start if it's already flagged as running, so no timestamps get updated if a previous run failed or fatal-errored.
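The start/stop bookkeeping and the "won't start if already running" guard can be sketched like this. A minimal Python/sqlite illustration; all names and the schema are my own invention, not the actual project's:

```python
import sqlite3
import time

# Minimal sketch of the run-state model described above (hypothetical
# schema; the real project presumably has its own models and tables)
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE script_status (
    name TEXT PRIMARY KEY,
    running INTEGER DEFAULT 0,
    last_checkin INTEGER,     -- unix timestamp of last start/stop
    checkin_period INTEGER    -- max seconds allowed between check-ins
)""")
db.execute("INSERT INTO script_status VALUES ('importer', 0, 0, 3600)")

def mark_start(name):
    """Refuse to start if already flagged as running; else flag + timestamp."""
    (running,) = db.execute(
        "SELECT running FROM script_status WHERE name=?", (name,)).fetchone()
    if running:
        return False  # previous run never finished -- don't start
    db.execute("UPDATE script_status SET running=1, last_checkin=? WHERE name=?",
               (int(time.time()), name))
    return True

def mark_stop(name):
    """Clear the running flag and update the timestamp at the end of a run."""
    db.execute("UPDATE script_status SET running=0, last_checkin=? WHERE name=?",
               (int(time.time()), name))

assert mark_start("importer") is True   # first start succeeds
assert mark_start("importer") is False  # refuses while still "running"
mark_stop("importer")
assert mark_start("importer") is True   # can start again after a clean stop
```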

I then have another daemon that runs and checks whether each script last checked in within its "checkin" time period, which is also set in that table as a per-script entry. If not, it sends out an email to say the script has failed, then changes the "running" status back to zero. That way, if it's a one-off failure I won't keep getting emails every hour, but I'll still know something went wrong at some stage in the day (maybe their API). If I keep receiving the alerts, I know something has gone very wrong.
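The watchdog side of this can be sketched in the same vein. Again a hypothetical Python/sqlite illustration with the email stubbed out as a print, not the project's actual code:

```python
import sqlite3
import time

# Sketch of the hourly watchdog described above (invented schema matching
# the run-state table idea; the alert is a stand-in for a real email)
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE script_status (
    name TEXT PRIMARY KEY, running INTEGER,
    last_checkin INTEGER, checkin_period INTEGER)""")
# A script flagged as running that last checked in two hours ago,
# with a one-hour checkin period -- i.e. it died mid-run.
db.execute("INSERT INTO script_status VALUES ('importer', 1, ?, 3600)",
           (int(time.time()) - 7200,))

def send_alert(name):
    """Stand-in for sending a real email."""
    print(f"ALERT: {name} missed its check-in")

def watchdog():
    """Alert on stale scripts, then reset them so a one-off only emails once."""
    now = int(time.time())
    stale = db.execute("""SELECT name FROM script_status
        WHERE running=1 AND ? - last_checkin > checkin_period""",
        (now,)).fetchall()
    for (name,) in stale:
        send_alert(name)
        # reset "running" so a one-off failure doesn't email every hour
        db.execute("UPDATE script_status SET running=0 WHERE name=?", (name,))
    return [n for (n,) in stale]

assert watchdog() == ["importer"]  # stale run caught, alert sent
assert watchdog() == []            # status reset, no repeat alert
```

Recurring alerts then mean the script is repeatedly starting, dying, and being reset, which is exactly the "something has gone very wrong" signal described above.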

It's a great way to have "tests", or more accurately "status checks", run against your application every hour, making sure everything is doing what it should. It was something I'd set up a while back and forgotten all about. Then I started getting alerts; at first I blamed the API.

Then, as the emails kept coming, I ran the script manually and found a "duplicate primary key" SQL error that was causing the script to just stop running. The code was working perfectly, but someone had done something to the database when taking a copy. And by someone, I mean me. I would have been none the wiser otherwise, as all the unit tests and the site front end were working perfectly, and no code had been changed in weeks.

But thanks to this little bit of foresight and a simple model/script runtime checker, I was able to identify and easily resolve a problem that could have stayed buried for months. I promised I'd write this post to remind my future self of the "why".

So here it is.