Thanks Gunnar for responding. I was hoping that someone knew of additional server switches to bring more information into the log file (more than INFO). If not, maybe a blynk developer like Dmitriy would chime in.
What brought me down this road was implementing fail2ban. It's a utility that works with iptables (part of a Debian Stretch install) to firewall IPs that are trying to hack your system through open ports. Many evil people out there.
Default SSH uses port 22, but that can and should be reassigned since we have many ports to pick from. In addition, users typically forward 8080 and 9443 to their local server IP. Anyhow, fail2ban is a filter-type scanner where you can use regular expressions to parse log files for certain keywords. For instance, with DEBUG logging enabled I found:
-> DEBUG- Unsecured connection attempt.
-> DEBUG- Error resolving url. No path found. GET : /robots.txt
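For anyone wanting to try the same approach, here is a rough filter sketch. The file name is hypothetical, and it assumes the DEBUG lines in your blynk.log carry the client IP (mine did) — check the exact line format in your own log and adjust the failregex to match:

```ini
# /etc/fail2ban/filter.d/blynk.conf  (hypothetical file name)
# Assumes the DEBUG entries include the offending client IP; fail2ban
# substitutes its host-matching pattern wherever <HOST> appears.
[Definition]
failregex = DEBUG- Unsecured connection attempt.*<HOST>
            DEBUG- Error resolving url\. No path found\..*<HOST>
ignoreregex =
```

You can sanity-check a filter against your live log with `fail2ban-regex /path/to/blynk.log /etc/fail2ban/filter.d/blynk.conf` before enabling the jail.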
The IPs were from another country, trying to access the web interface on Blynk. After getting over the shell shock, I started looking back to 2017 and found all kinds of attempts.
My next step was to increase the reporting content using log.level=trace on the server. What I saw next was even worse. It was then that I wrote my filter to scan blynk.log and ban IPs doing these nefarious activities. The problem I ran into was that Blynk was not listing IPs for the below log entries/hacking events. Without the IP information, fail2ban cannot firewall the offender. I read on this site, in a response from Dmitriy, that a server reset (below) was caused by hardware or the router. While that can be true, it's more likely caused by hacking attempts. I proved this by logging port activity on ports 8080 & 9443 (the iptables syslog entries contained the source IP). I then compared the syslog times with blynk.log, and the below events were caused by those illegal access attempts.
01:09:16.220 TRACE- Blynk server IOException.
java.io.IOException: Connection reset by peer
00:13:12.668 TRACE- HTTP connection detected.
00:13:12.669 TRACE- In http and websocket unificator handler.
NOTE: The above entries do NOT contain IPs, so fail2ban cannot flag them.
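The port-activity logging I mentioned can be done with a couple of iptables LOG rules, something along these lines (run as root; the chain placement, ports, and prefixes are just examples for a default setup):

```shell
# Log each NEW inbound TCP connection on the Blynk ports to syslog.
# The prefix makes the entries easy to grep; the SRC= field in each
# syslog line carries the source IP you can compare against blynk.log.
iptables -I INPUT -p tcp --dport 8080 -m state --state NEW -j LOG --log-prefix "BLYNK-8080: "
iptables -I INPUT -p tcp --dport 9443 -m state --state NEW -j LOG --log-prefix "BLYNK-9443: "
```

Depending on your distro you may need `-m conntrack --ctstate NEW` instead of `-m state --state NEW`, and you'll want to persist the rules across reboots (e.g. with iptables-persistent).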
As you stated, help on this topic from the community might be niche, BUT these activities are happening if you're running a local Blynk server open to the outside world. I've implemented a set of interfaces (fail2ban & scripts) that have addressed the issue with much success. My hope is that Dmitriy would add ALL IP information into blynk.log so fail2ban can scan for and firewall these types of events. Security is everyone's concern these days, and all these robots are eating bandwidth that we pay for. Bottom line: having some standards for a firewall would be a plus for the local Blynk server community.
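For a rough idea of the fail2ban side, a jail tying a "blynk" filter to the server log looks something like this (the path, ports, and times here are examples, not my exact config — point logpath at wherever your server writes blynk.log):

```ini
# /etc/fail2ban/jail.local — example jail for a local Blynk server
[blynk]
enabled  = true
port     = 8080,9443
filter   = blynk
logpath  = /home/pi/Blynk/logs/blynk.log
maxretry = 3
bantime  = 86400
```

After editing, restart fail2ban and check the jail with `fail2ban-client status blynk`.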
- If you're running a local server open to the outside world, robots & hacking attempts are happening. If you're of the group …then stop reading and sleep well at night.
- After locking down the admin web IP login, general security measures against hacking seem to end in Blynk. SSL does not stop the attempts.
- Implementing a recommended firewall process for local Blynk servers is easy with IP & full-date logging. Yes, you will have a log-size increase, but simple crontab events can be set to clean the archive directory. Note: once fail2ban gets a hit, it has a DB that takes over tracking how long to ban the IP…blynk.log is no longer needed.
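The archive cleanup mentioned above can be a one-line crontab entry; the path and the 7-day retention below are just examples, so substitute your own log directory:

```shell
# Run daily at 03:00: remove archived Blynk logs older than 7 days
# (edit with "crontab -e"; path is an example)
0 3 * * * find /home/pi/Blynk/logs -name "*.log.*" -mtime +7 -delete
```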
If anyone is interested in more specifics, I'll do my best to steer you through setting something up. Just for reference, I'm running a Pi 3 and I'M NOT A UNIX GURU.
Regards and thanks again Gunner.